Manifesto for Responsible Software Development (responsiblesoftware.org)
73 points by jraedisch on Nov 18, 2016 | 55 comments



ACM and IEEE collaborated and agreed on a Software Engineering Code of Ethics decades ago:

http://www.acm.org/about/se-code

Can we not reinvent this particular wheel please? Codes of Ethics aren't JavaScript frameworks.


The National Society for Professional Engineers, which, among other things, is the body that runs the exams for licensing professional engineers in the United States, has a similar code of ethics:

https://www.nspe.org/resources/ethics/code-ethics


And a professional licensing infrastructure (hint hint)


Yes! If a person has taken an ethics course as part of a computer science degree, they would know this.


How many computer science degrees include an ethics course?

That's not a sarcastic question, I genuinely don't know. It seems to be less common than in other engineering disciplines, probably because the harm of misused civil or environmental engineering is so much more blatant. (And perhaps, because so much of professional software - like tracking ad networks - is rather grey morally.)


Why shouldn't there be many codes along with lively discussion between supporters? And even if there was only one true "Code of Ethics" (which seems dangerous to me), there could still be different representations.


I think there's nothing wrong with multiple competing codes, just like there are many different open source licenses.

But I am concerned that right now they're all crying in the wilderness - if two codes are pretty similar, it might be worth burying our disagreements over them until there's some kind of actual conversation or acceptance of any standard at all.


There should be as many ethics codes as people feel the need for. Better that they commit honestly to a different code, so we can count on them honoring their signature, than that they half-heartedly sign something they have no intention of following.

Good, honest and decent people can disagree on what is morally right.


> I will not develop software that is intended to violate human rights and civil liberties.

This one doesn't work, especially in regard to cryptography. For example, if I develop the greatest end-to-end encryption, I can almost guarantee it will be used by both good and bad actors. There's nothing I can do about that.

IMO, a manifesto like this should really be much closer to the Hippocratic Oath, which is to say that my responsibility belongs with the patient, disregarding personal feelings toward the patient. So for us, that should mean writing code that does the best job it can, because at the end of the day we can't control who uses it.


I would argue that cryptography wouldn't violate that aspect. The intent isn't to violate human rights or civil liberties, even though it can certainly be used that way. So too can a text editor or a printer driver or scheduling software. I think you're being a bit broad.

To me, it reads fairly clearly as a stand against code whose specific intent is unethical, not its potential and unrelated uses.


Yeah, I get the intent argument, but I don't think it has a good separation of concerns, for lack of a better term. It's written in such a way as to state that if I know it will be used for bad purposes, then I should not write the software. And as you stated, even a text editor could be interpreted that way.

I could even turn this around in a different way: let's assume I decide that I want to help catch bad people. I do this by writing some software that helps deanonymize connections. This can be used to help stop DDoS attacks, track down sex traffickers, etc. So its intent is good, but it will also be used by Iran, Syria, Russia, China, the USA, et al. to track down dissidents.

I believe this software would be unethical by these standards because of its potential misuse/abuse, even though it was written with good intent! So is it unethical or not?

The reason I dislike that rule is that it places responsibility for use/abuse on the developer, when it's really the operator of the software who is at fault. Right?


> The reason I dislike that rule is that it places responsibility for use/abuse on the developer, when it's really the operator of the software who is at fault. Right?

I think that in practice I disagree with this. Certainly final moral responsibility lies with whoever misuses a tool, but on a practical level people should be aware of the primary or predictable results of their work.

If you create a cryptography system and release it publicly, you should be aware that it will be used by people hiding unethical things. If you build a deanonymization tool, you should be aware that it will be used for surveillance by people with ill intent. These are statistical certainties - if your tool is good, it will be used in these ways, and you can't claim surprise when it happens. It's like the stochastic terrorism question, where you can't know what an individual will do but you can easily predict that a system will produce violence somewhere.

None of this means you shouldn't do those things. Inventing TNT wasn't evil just because it's been used for violence, and when it comes to something like cryptography there's a real case for "this will be built eventually, so we have to live with it". With tracking, I sometimes feel the moral question is greyer, especially since the results are system-dependent and not 'inevitable'.

So yes, build these things, and accept that they'll be used for all sorts of purposes. But do consider the risks, and be aware of the degree of harm. Building a tracking system that Iran might use someday is very different from building one for Iran, knowing what will be done with it.


I don't think the Hippocratic Oath model quite applies to software. The purpose of the medical profession is to preserve life, a basic human right. Even a wounded mass murderer deserves decent medical care before being brought to trial. But that's very different from saying a mass murderer deserves a platform, deserves infrastructure, deserves tooling.

Infrastructure is important (and I am in favor of it personally, especially good end-to-end cryptography), but it starts from a point of being morally neutral given no other information, whereas medical care starts from a point of being morally good given no other information. A lot of uses of infrastructure end up being morally good. A lot, like mass surveillance tech, end up being morally bad, and the fact that they can in theory be used by "the good guys" (or for amoral goals like ad targeting) doesn't swing that back to positive.


Human rights and civil liberties are (conceptually) granted by the state and are not something a citizen can or could "take" from another citizen. Points like this are usually aimed in that direction: they mean involvement in state (or military-intelligence-industrial-complex) projects aimed at taking those rights away.


Under Enlightenment era philosophy, human rights are innate (or equivalently, granted by God). The state may or may not acknowledge those rights, but they don't create them.

This is literally the first thing you read in the very first founding document of the U.S. (the Declaration of Independence): "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights".


"I will not develop software that is intended to violate human rights and civil liberties." is a nice concept but every country's idea of what these things is both highly overloaded and mutable. What you find a violation another programmer might find acceptable. Of course you may be free to agree and someone else might argue the opposite.


"What you find a violation another programmer might find acceptable"

That is exactly our intention. We want everyone to think about and reflect on his or her own values. We believe there can't be one single truth for all. But the real danger is where people don't think at all about what's right or wrong.


Do you really think that this is needed? Are developers not operating within their own ethical guidelines? Seems weird to assume we/they aren't.


There were some Yahoo employees who implemented the firehose delivering their users' data to the government. Considering the general political climate in the US tech scene, I can't believe they thought they were doing something good. They either didn't think about it at all, or decided their job was more important to them.

Symbolic gestures like this pledge might help with the former, because they raise the idea that software may have an ethical component.

For the latter: it's kind of sad that these people at Yahoo – being incredibly more employable than average – still couldn't muster the courage to say no. Could've done it quietly and found a job somewhere else. Could've done it publicly and been a hero (and probably still found a job – some companies out there would've wanted to support them, or used their hiring as marketing).


> For the latter: it's kind of sad that these people at Yahoo – being incredibly more employable than average – still couldn't muster the courage to say no. Could've done it quietly and found a job somewhere else.

For this tactic to achieve its goal (that the software doesn't get written), you'd have to ensure that literally zero developers now and in the future would be willing to write the code. As was discussed in a previous thread [1], when faced with the prospect of writing "unethical code", the consensus seems to be that even if a few developers take an ethical stand, there will always be some other developer out there without such strong convictions who is willing to write the code instead.

I'd also like to point out that being employable does not necessarily mean you're in the position where you can put your job on the line for something. I consider myself an "employable" software guy but I've got bills to pay, and 3 months of unemployment while I find my next job is not exactly compatible with that.

1. https://news.ycombinator.com/item?id=12965589


Obviously a vocal departure is the easy way to generate public outcry, but I'm not sure a quiet one is as useless as this suggests.

There are certainly some fundamentally broken things that will happen as long as anyone is willing to do them - I don't expect spammers and ad fraudsters to disappear because a few programmers quit in a burst of ethics. But Yahoo wasn't dependent on unethical behavior; they had a (somewhat) viable business doing reasonable work.

Those are the companies susceptible to quiet moral stands. They're not dependent on ethical lapses to survive, so they can push back on them and continue to exist. And they're big, wealthy companies that spend lots of money on getting good programmers, so if some of those talented programmers say "screw this" and quit, it's a problem even if they can replace the missing bodies.

None of this addresses the last point - I don't expect people to starve over this, and I imagine most would stick around until they had a new job lined up. But I do think that pushing back on these companies can help even if there are people willing to do the dirty work.


> Considering the general political climate in the US tech scene, I can't believe they thought they were doing something good. They either didn't think about it at all, or decided their job was more important to them.

Everything is more complicated than this.


At the very least it can be nice to have something to point at in disputes. My own implicit ethical guidelines are also always in a struggle with other impulses/motivations/fears/etc., so they might need some help, and possibly refinement.


This is a nice feel-good exercise, but utterly meaningless.


Seems to have good ideas, but focuses on the negative. Here are some high-minded principles about what sort of software we should be building: http://www.loper-os.org/?p=284


I will not sign the manifesto. I prefer to do whatever I want with software, thanks.


We call this the Chaotic Neutral Manifesto, and it's grand.


It's not a bad analogy; a software engineer is in many ways modern society's equivalent of a spellcaster.

And, while a high-level chaotic neutral spellcaster might be great to have in your party, it is quite legitimate for the average village to be terrified of one who's just getting a drink in the tavern.


We could write a manifesto for that. Or not, and just get back to our lives.


Point 2 is problematic because it applies to a moving target of "human rights and civil liberties." I may agree with society's definition of these terms today, but the world changes, and with it the definition of "human rights." And while point 4 is applicable to many programmers, it's not even close to universal enough to warrant a blanket pledge.


A manifesto is not really useful unless it's enforceable. I can sign this, but I'm still compelled by my employer to act as they say, under the threat that I could lose my job if I disagree and they will hire someone who will do it anyway.

My employer wouldn't do that I'm sure but it's no reassurance to me or anyone else. A professional licensing organization has more weight. I would be beholden to the manifesto and would have an organization with the weight and resources to litigate if my employer tries to fire me to get around my ethical responsibilities.

Fortunately, there is one such program developing in the country where I live, and I'm doing what I can to pursue the education requirements to be licensed. I believe that liability and professional responsibility are going to become necessary for developing many kinds of commercial software.


Too bad a manifesto that states "I will not waste the most valuable resource: time" can't exist


Hi,

I just saw that someone posted our manifesto here. Thank you!

If you have any questions, please ask me here. I will try to answer them.

Best regards, Nils


I'm with the earlier commenter noting the code of ethics from IEEE & ACM. I think there are aspects of human life not being addressed in either their set or yours, though.

What's missing:

- minimizing attention required

- preserving human connection

- digital addiction prevention, detection, mitigation

- ethical software that knows how we're feeling (we need to start thinking about this one now before the tech becomes ubiquitous)

The problem I see with each of these is that we have very few design processes for addressing them, so asking people to focus on them leads to continued naive harm due to a lack of understanding. Most social media serves to connect humans, but actual human connection is poorly maintained through it.

I'm working on a scientific theory of design that combines neuroscience with mindfulness & design thinking. The list I mentioned above covers the core problems I'm trying to tackle with it.

I'm curious: what are your thoughts on all this as it applies to the manifesto?


First, thanks for setting this up!

It seems like everything in the manifesto boils down to #1 ("I will act according to my conscience"), because everything else is vague enough to be debatable.

Some engineering disciplines augment high-level oaths (such as this one) with concrete, enforceable codes of conduct. They also put in place a barrier to entry, so that phrases such as "will do my very best" carry with them the force of assumed competence -- i.e., there's a limit on precisely how bad "best" is allowed to be.

Do you support the professionalization of Software Engineering? Have we reached a point where software is important and pervasive enough that legally enforceable professional codes of conduct are now reasonable?

I ask because it seems odd to me that someone would think something like this pledge is necessary and good, but also not support treating Software Engineering as a proper engineering discipline. So I'm very interested in your reasoning, especially if your answer to the above question is "no".

Again, thanks!


"It seems like everything in the manifesto boils down to #1 ("I will act according to my conscience")"

Yes

I think software engineering has reached this level of professionalism in many places, where specific codes of conduct are already in place (DO-178B, IEC 61508, ...).

But software is reaching more and more places and the barriers are lower every day. I believe that we need the discussion about ethics across all layers of professionalism.

This manifesto is of course not enforceable. But it is meant as a basis for discussion.


I would like there to be some promise of not wasting people's attention. Or maybe that is one of the "resources"?


I'm developing a scientific theory of design that includes preserving attention. I'd like to hear more of your thoughts on this.


Nice. I've had some thoughts related to this idea. How do you test such a theory?

I think there is some profound idea lurking where "cache misses" and convenience overlap. We can only keep so much information in our short-term memory. The more convenient an interface is, the more space is left for information that is strictly important for the task the user is trying to solve. Also, as humans I believe we use association for populating our "cache". So if we have an intuitive interface, it helps us "pre-cache" information because it will trigger the right associations at the right time. Distractions will trigger the wrong things to be "cached" by association, and this will slow us down.

Yet another critical factor is derailment. Or the opposite of being in the zone. If we miss the cache too much we tend to start analyzing ourselves rather than the problem. A second layer of distraction occurs. Or perhaps avoidance behaviors are triggered which further distracts us.


The current plan is to build an emotion recognition machine (http://eqradio.csail.mit.edu) and experiment on myself, combining that input with video and activity tracking.


Did you see this article? https://medium.com/swlh/how-technology-hijacks-peoples-minds...

Apart from that I'll gladly try to write down my own thoughts, but that might take a while.

Is the progress you are making accessible somewhere?


I was really excited when I read that article a few weeks ago because I'd like to find more people working on this.

What I'm working on only just started taking shape a couple weeks ago, so all I have are some pages in a notebook. I've been laying out observations & hypotheses. Over the last few days, I've started looking into the structures of scientific theories and will be typing up my notes starting today. It'll all go into a repo under this org:

https://github.com/mindfully


Do you mean something like http://timewellspent.io?


What does not collecting too much data mean in practice? It seems a lot of analytics would be in the grey area, especially when doing reverse geo lookups. A simpler example: can I use Apache, which gathers people's IP addresses, browser info, and referring links, if I don't necessarily know how I'll use that data?


An example: I have a website and I don't need to know the IP addresses, browser info or referrer URLs of visiting user agents. So allowing Apache to log that information would be "collecting too much data".

Thus, Apache is configured with the following LogFormat[0] which associates an explicit format[1] with a nickname (here, "essential"):-

    LogFormat "%{%FT%T%z}t \"%r\" %>s %B %D %L" essential
and in VirtualHost contexts, a CustomLog[2]:-

    CustomLog ${APACHE_LOG_DIR}/access.log essential
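
For reference, that "essential" format keeps only an ISO 8601 timestamp (%{%FT%T%z}t), the request line (%r), the final response status (%>s), the response size in bytes (%B), the time taken to serve the request in microseconds (%D) and the error-log ID (%L); no client address, user agent or referrer is recorded.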

[0]: https://httpd.apache.org/docs/2.4/mod/mod_log_config.html#lo...

[1]: https://httpd.apache.org/docs/2.4/mod/mod_log_config.html#fo...

[2]: https://httpd.apache.org/docs/2.4/mod/mod_log_config.html#cu...
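
If you can't trim the LogFormat itself, Apache's piped logs let a CustomLog stream each line to a program of your choice before anything reaches disk. A minimal sketch of a scrubbing filter, assuming the client address is the first field as in the common/combined formats (anonlog.py is a hypothetical name, not a stock tool):

    #!/usr/bin/env python3
    # Hypothetical piped-log filter: Apache starts this once and streams
    # each access-log line to stdin; we blank the client address before
    # the line is written to disk.
    import sys

    logpath = sys.argv[1]  # e.g. /var/log/apache2/access.log

    with open(logpath, "a", buffering=1) as out:  # line-buffered append
        for line in sys.stdin:
            head, sep, rest = line.partition(" ")
            out.write("-" + sep + rest if sep else line)

wired up in the VirtualHost with something like:

    CustomLog "|/usr/local/bin/anonlog.py /var/log/apache2/access.log" combined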


You can get in trouble in the EU if you log IP addresses without explicit consent:

https://www.eff.org/deeplinks/2010/06/european-officials-goo...


Interesting, so basically everyone running web servers with default settings is breaking the law.


Which leads us to the unfortunate lack of a good "don't provoke avoidable user agreement inflation" manifesto. A perfect hiding place for "agreeing" to the much nastier stuff.


When you need the data to fulfill your business goal, it is OK (as long as your users know that you collect this type of data).

What we encourage you to avoid is "collecting everything, just in case..."

As long as you anonymize things, analytics are fine :)


IP addresses have been ruled to be "personal identifying information" in Europe, which is why Google Analytics offers an option to anonymize the last octet of the address.
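
For anyone curious what that amounts to: Google documents the option as zeroing the last octet of IPv4 addresses (and the last 80 bits of IPv6 ones). A minimal sketch of the same idea in Python, if you'd rather anonymize server-side before storage:

    # Sketch of GA-style IP anonymization: keep only the network part of
    # the address so the stored value no longer identifies a single host.
    import ipaddress

    def anonymize_ip(ip):
        addr = ipaddress.ip_address(ip)
        prefix = 24 if addr.version == 4 else 48  # /24 for v4, /48 for v6
        net = ipaddress.ip_network("%s/%d" % (ip, prefix), strict=False)
        return str(net.network_address)

    print(anonymize_ip("203.0.113.195"))  # -> 203.0.113.0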


> I will do my very best to prevent the waste of energy and resources

What determines whether something is a waste? Is writing a game that uses lots of power on GPUs a waste, because it doesn't further society?


Could you reduce the power consumption by just optimizing the most critical parts? If yes, not doing it would be waste, in my view. Is writing or playing a game waste? No (in my view).


Gaming is a human necessity, in my book.

I read it to mean "be efficient." For example, if your "frivolous" game requires an army of servers devoted to processing data, try to optimize their effort and be clever about scheduling uptime.


Lol @ the SQL injection attempt


I wish I had the moral fortitude to sign and stick to this manifesto.



