Should computer science researchers disclose negative social consequences? (nature.com)
252 points by rbanffy on Aug 3, 2018 | 270 comments



I've realized there's another thing about this proposal that bothers me: it's basically waterfall software development. Agile is by no means a panacea, but there's a reason it replaced the "specify in advance what everything will need to look like and how it will work together" method. Instead, you need to just start building pieces that move you towards your eventual goal, roughly envisioned, and know that you will need to iterate on that frequently to respond to what you learn along the way. The motivation of this proposal seems to be that we can know in advance what the impact of a new piece of technology is, and therefore we can just avoid ever developing the bad stuff, so we won't have to deal with the hard work of changing it later.

But, we cannot, and it's dangerous to think we can. After we develop technology, we will be surprised at how it is used and what the impact is, and we will have to respond to that, sometimes with more technology and sometimes with legislation and sometimes with changes in social mores. Thinking that developers can just anticipate in advance what will happen is at best delusional, like thinking that if we spend more time on the specification phase of our next waterfall project, we can do better.

Moreover, if Tim Berners-Lee had specified in advance, "this world-wide web thing will probably eviscerate all advertising-supported media, from newspapers to magazines to radio and television", I don't know that we would have made wiser choices as a result of that.


What bothers me about this comment is the implicit assumption that waterfall is a universal bad and that things that are less waterfall and more agile are good/better.

This is incredibly false. Agile (or non-waterfall) style development is useful in specific, limited cases: where you're creating something for which the cost of discovering a failure is small. You're writing a mobile app, or a b2b website, or some other common problem. It's cheap and easy, and the cost of a failure is so small that it's cheaper to just fail and go from there than to spend any time thinking about how something may fail.

If you're doing something mission critical, a space shuttle, something involving expensive parts, something high risk, you make sure you do thorough requirements analysis, well specified behavior, and well defined failure modes. You break the system down into individual small components (the more critical the failure, the smaller the components), create well defined specifications for each component, test the hell out of each one individually, ensure they work to spec, and then combine them. You don't Agile your way through it.

With Agile, you'd just get something minimal working end to end and iterate. If the thing you're working on can explode, you don't just iterate through a minimal user story. You define each item well, ensure it meets its specs, and understand its failure modes.

Same thing for the topic at hand on the impact of technology. Understand the failure modes of the components you're creating - and write them in your paper.


I present SpaceX to you as an amazing counterexample to "agile doesn't work for things that explode".

I would much rather sit in a rocket that has blown up 100 times on the launch pad but launched successfully a thousand times more, with each time being an opportunity to improve it, than one that has been engineered with waterfall and never flown...

https://spacenews.com/pentagon-advisory-panel-dod-could-take...


Of course no one can predict the future perfectly, but we can at least think about it with some degree of accuracy based on tests and trials.

As our technology becomes so much more powerful and evolves at exponential rates, don't you think it might be a good idea to think a little harder about how things could backfire?


One shouldn't constrain development; once the genie is out of the bottle, someone will develop it, whatever it is.

These considerations should be taken into account when technology is used, sometimes by policy makers in government.

There's no way industry will police itself; they've proven time and again that profit is far more important than humans.


At the extreme, there are all kinds of evil ideas that are technically possible but not widely implemented. This can be seen by how easy terrorism is compared to how infrequently it is carried out.

I'd rather not list out a bunch of evil ideas, but I think any engineer with a bit of thought can come up with plenty.


> I think any engineer with a bit of thought can come up with plenty

I think you're arguing the article's point. Many knowledgeable people can come up with dangerous things. Many people have bad intentions. Luckily for society, there often isn't much overlap of those two groups.

This paper is saying "Hey people who have ideas, don't just think through the technical idea you have. Think of how someone with negative intent might be able to leverage this idea to easily do something they couldn't do before. Or even how it could unintentionally happen."


I find this new trend of trying to predict how technology will be used - and presuming its users will either be evil or too stupid to use it responsibly - incredibly patronizing.

Engineers are not philosophers and should not be placed in this role. We do not have the tools to do it. As with any human we have a duty to consider first order effects - but even the most astute philosopher will have trouble anticipating second order effects. This burden should not be placed on those least qualified to perform it.

Edit: Thank you to everyone for your thoughtful replies. I find this topic fascinating to discuss.


> I find this new trend of trying to predict how technology will be used - and presuming its users will either be evil or too stupid to use it responsibly - incredibly patronizing.

People who create something, who research something, should by definition have a better grasp on their topic than everyone else. To expect that they are more knowledgeable, and therefore better equipped to see what it means, and therefore also have a civic duty to explain this to the less well-informed, is not patronizing. It's acknowledging their expertise.

> Engineers are not philosophers and should not be placed in this role.

This is an interview in Nature. Academics with the ambition to publish in academic journals, be they prestigious like Nature or Science, or smaller and more niche like <insert your favourite journal here>, are taking on a role of a philosopher.

And sure, engineers may "just" apply technology by comparison, but honestly I find that a very dismissive attitude as well. They are the "praxis" to academia's "theory", meaning that what they do actually has consequences in our lives. So if anything, their work comes with even more of a moral obligation to "Do The Right Thing" than that of academics.


> People who create something, who research something, should by definition have a better grasp on their topic than everyone else. To expect that they are more knowledgeable, and therefore better equipped to see what it means, and therefore also have a civic duty to explain this to the less well-informed, is not patronizing. It's acknowledging their expertise.

They are experts in how the technology works. That doesn't make them experts in how other humans will take advantage of what they have discovered/created. I've extended the charity of assuming the scientist or engineer themselves had non-evil aims.

> This is an interview in Nature. Academics with the ambition to publish in academic journals, be they prestigious like Nature or Science, or smaller and more niche like <insert your favourite journal here>, are taking on a role of a philosopher.

Yes, but a philosopher on which topic? Someone with a doctorate from an engineering background is not as well educated in the societal effects of technology as someone with degrees in history or other studies of human nature.


The goal of this proposal seems worthwhile but the implementation leaves a lot to be desired.

Humanity benefits when the broader impacts of technology are factored in. Who does this, and with what influence, is arguably one of the largest themes in human history. This particular proposal is just one of many options.

I don't find this article particularly compelling. Having a requirement for a social-impact section is a blunt instrument; it might become just a pro-forma exercise.

I think I would be more interested in encouraging (not requiring) journals with strong ethical foundations to let authors opt in to discussing social impacts. If they do, I would expect these social impacts to be vetted by social scientists, philosophers, ethicists, and so on -- areas where computer scientists often have large blind spots.


Doctorate of philosophy.

The Ph in PhD actually means something. It's not some kitschy decoration.


The Ph in PhD refers to natural philosophy, which diverged from moral philosophy early on.


How is natural philosophy different from moral philosophy?

I'm sure that comes across as a dumb question, but, modernity - technology.

The irony of mathematics being naturally built in, reinforced into all of us, over and over. Computer science.

From wikipedia:

> From the ancient world, starting with Aristotle, to the 19th century, the term "natural philosophy" was the common term used to describe the practice of studying nature. It was in the 19th century that the concept of "science" received its modern shape with new titles emerging such as "biology" and "biologist", "physics" and "physicist" among other technical fields and titles; institutions and communities were founded, and unprecedented applications to and interactions with other aspects of society and culture occurred.[1] Isaac Newton's book Philosophiae Naturalis Principia Mathematica (1687), whose title translates to "Mathematical Principles of Natural Philosophy", reflects the then-current use of the words "natural philosophy", akin to "systematic study of nature". Even in the 19th century, a treatise by Lord Kelvin and Peter Guthrie Tait, which helped define much of modern physics, was titled Treatise on Natural Philosophy (1867).

> "systematic study of nature"

And what do we do? What is nature to study? What is life to study?

Our own puzzled face, staring back at us.


[flagged]


Personal attacks will get you banned here. Please don't post like this again.

https://news.ycombinator.com/newsguidelines.html


Might want to develop some sensitivity and awareness to never being able to tell the difference between too much studying or not enough studying.

"Real studying". Figure out what it means when you don't have anyone to design tests to check your own knowledge for you.

Bongs. Is it fun to insult people for thinking?

Real studying only means something when you have a teacher and an educational institution. Real world is much harder. You can watch your own understanding loop around itself hundreds of thousands of times. And still not know, do I know anything at all?

How many times do you have to observe something before you accept it as fact. 1 time? 2 times, perhaps once indirectly? 3? Infinite? How many ways can a truth be conveyed, and how many ways can you be made to interpret it so you know the difference between being entrusted with an awareness, or whether you've been manipulated into accepting it.

Socrates, aka "I know that I know nothing", had 3 books written about him. Aristophanes, The Clouds - he was a bumbling fool. Ever think about why he appeared that way to different sets of people?


How many times do you have to observe something before you accept it as fact.

Post-modern philosophers will quite correctly argue that you can know very little about things just by observing them, and in fact can never get close to complete knowledge that way.


I don't think I would ever have argued with that.

Mechanical languages though, mechanical systems. Not particularly interested in quantum physics. That breaks things down so much to the point that everything can unfold into a somewhat distorted method of processing your own knowledge. Like trying to understand something from both yourself, and what is not yourself.

Mathematics. Logic. Those things can be accepted at face value. I think that's the closest one can get to complete knowledge. That's my opinion though.


I repeat my suggestion that you put down the bong. You're not making a coherent argument.


You don't need drugs to have an independent understanding that seems to be nonsensical to others. Just think a lot, mostly alone, think about what everyone says, think about everything you can read and find anywhere, and try to grasp some crystal clear understanding, but by the time you think you have it - gone - trying to explain the process to someone.

Actual studying. I'd say it's next to impossible to have a complete understanding about anything. You can always think you have the full understanding, but that's just very specific to whatever test or goal post designed for you, or you design for yourself.

Bertrand Russell wrote 370 pages of a book to prove 1 + 1 = 2.

Is that coherent enough?


I literally cannot understand the point you are making.

Are you referring to relativism vs. objectivism? Or that people can have an understanding of a concept but lose it in trying to explain it?

I'm really trying in good faith to get you to make what I consider a coherent response. Personally, I've studied rhetoric, logic, philosophy (the moral kind), and science (the natural philosophy kind). I have a subtle enough knowledge of all of these to understand and appreciate nearly all points of view and have intelligent discussions with a wide range of people on a wide range of topics.

But, so far, nothing you said seems to contradict my statement that the P in PhD refers (with the exception of people getting PhDs in philosophy departments) to natural philosophy, which is to say, the quantitative, experiment and hypothesis based approach to understanding how the universe works, as opposed to moral philosophy, which is a primarily logic-oriented approach to abstract and concrete problems experienced by humans.

Those two intersect to some degree, and some people who obtain PhDs in quantitative fields do spend some time being introspective about moral philosophy, but it's not commonly required in the form of coursework or research.

Perhaps the closest relationship between moral philosophy and natural philosophy (which is what we now call "science") is math, and it's still very much an undecided problem as to whether math and science are the same thing (https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.htm... and https://en.wikipedia.org/wiki/Mathematical_universe_hypothes... being the two most interesting statements on that problem) and whether the logic employed by philosophers is the same thing as mathematical logic. There is some useful prior writing on this, for example https://plato.stanford.edu/entries/philosophy-mathematics/

Ultimately, though, I just can't see what point you're making. I think you're saying that all knowledge is relative, and that even scientific/natural philosophy is identical to moral philosophy, rather than "all students of the natural sciences should also take classes in philosophy to develop a more intuitive understanding of epistemology".


> I literally cannot understand the point you are making.

You kept insulting me. Why would I try to make a clear point?


> Doctorate of philosophy.

> The Ph in PhD actually means something. It's not some kitschy decoration.

In Germany, the doctor's degrees are quite different. Typical ones for the HN audience are (for a long list cf. https://de.wikipedia.org/w/index.php?title=Liste_akademische... ):

- "Dr.-Ing." for engineering (typically including Computer Science)

- "Dr. rer. nat." (doctor rerum naturalium) for natural sciences (typically also including mathematics). Even though the university of Frankfurt am Main issues the "Dr. phil. nat." (doctor philosophiae naturalis) title instead, the "nat." ending for the latter one is very important to distinguish it from the "humanities doctor's degree" "Dr. phil." (doctor philosophiae)

TLDR: In Germany, the names of the doctor's degrees make it very clear that they have nothing to do with philosophy.


I will not be convinced into believing a PhD represents less than what it does. They are absolutely fundamental to the retention, collection, understanding, and dispersion of knowledge. Otherwise intellectual understanding goes in circles. No progress.


A PhD represents what it does; it's just something other than what you seem to think it is. A PhD is awarded for being a specialist in a very specific topic (+ perseverance in jumping through institutional hoops), not for being a specialist who also has an extra understanding of sociology, or of philosophy in the modern sense.


That's absolutely not what I was taught it meant, or learned, or came to discover.

It literally looks like anyone can change knowledge, override it, mutate it, destroy it, falsify it.

I don't know why most people get PhDs these days, you are correct. But I know what it used to mean, and I hope it means that one day again, soon.

In the real world you have to jump through hoops no matter what you do. Ignorance to them is stupid.

We live in society, that doesn't make everyone a sociologist. It just means everyone who lives in society should at least devote some portion of their thinking to the contemplation of society, including institutions of education. People in academia are not omnipotently protected from the forces of society. If society doesn't value these institutions, they go away.

The Ph in PhD is important. It really is, and it's idiotic to let that distinction be viewed as something that's just, obsessed with a pedantic, tiny niche of no real consequence.

I've considered myself an autodidact for much of my life. I've also had a great education (MSc, only worked on PhD for one year and left school due to health complications). Comparing these two worlds is hard, but teaching yourself when you can be taught by others is the hard, long, and painful way to learn stuff. And you never know what you know when you only have your own knowledge to trust, checking against everything else, with no real stable, structured, foundational, 'stand on the shoulders of giants' mechanic for the discovery, pursuit, and application of truth / knowledge.

You can have some idea that the individual mind independently can be related to the entire body of academia like it's some metaphor for their functioning as an organizational entity, a super colony of minds that function like one - tested against the rest of the world - but that's oversimplified. Our own puzzled face, staring back at us. We are all part of one big system, but some stuff, distinction really does matter. I don't talk to many people who have PhDs that I know of in real life, just one. He's the only person who seems to really understand the absolute delicate fragility of, and the consequential burden - that comes with the discovery, creation, pursuit, and retention of knowledge, at least that I know of, in real life.

How does that relate to having an extra understanding of sociology, or philosophy? I honestly think my point is built into the experience I am relating, or at least I have to hope I can still convey that much, correctly.

Philosophy should be important to society, and it's a shame when it's not.


But it has zero to do with philosophy in the modern sense. Wikipedia will easily tell you that "philosophy" in this context refers to an original Greek root and an archaic way of categorizing fields of knowledge.

https://en.wikipedia.org/wiki/Doctor_of_Philosophy


Since when did Wikipedia become the authority on what a PhD represents?

That's a little freaking ironic, if you don't mind me saying. A PhD represents its own authority by virtue of being awarded by the tiny fraction of people in the world who actually understand the problem space you might be studying well enough to say "Research accepted. We trust you can take our place as educators and researchers in the future when we are dead."

Also, there's a good reason the Greeks had that archaic way of categorizing knowledge. Hyper-connectivity of all knowledge available at once is a total clusterfuck to deal with from an individual mind perspective. Just because it's old doesn't make it wrong. History repeats itself, etc.


> To expect that they are more knowledgeable, and therefore better equipped to see what it means...

At least in computing, this seems to have been largely false. Whatever my respect for the expertise of creators, their track record on predicting the future uses of their developments has been genuinely bad. I don't think that's an indictment of their skills, just an acknowledgement of how complex and rapidly-changing systems really work.

Understanding the future impact of ARPANET was impossible without also anticipating the rise of home computers, but the creators of ARPANET had no special expertise on that point. Computability experts and chip designers got predictions about topics from weather modeling to AI Go play totally wrong because everything from GPU computation to neural network advances has rendered their accurate domain knowledge misleading. And the track record of AI experts predicting the real-world role of AI is so infamous that I don't think I need to elaborate.

Honestly, I expect that this would be computer science's Modernist moment. Corbusier and friends were genuinely excellent architects, but the move from "designing on sound principles" to "trusting that we know all the future uses of buildings and designing for those" was disastrous. The result is a legacy of inflexible building design, hostile urban planning, and government-embraced block housing that caused real harm to the people it was inflicted on.

"A better grasp on their topic than everyone else" is presumably true, but it's not enough. Complex systems are often beyond what any one person can predict, and that only becomes more true when they're also rapidly changing from many directions. I don't think most creators in computer science can make predictions good enough to act on until their creations have been exposed to the world for a while.


> Honestly, I expect that this would be computer science's Modernist moment. Corbusier and friends were genuinely excellent architects, but the move from "designing on sound principles" to "trusting that we know all the future uses of buildings and designing for those" was disastrous.

First of all, I love me a good Le Corbusier bash, and I totally agree with you here. I also get where you're coming from.

However, what the article asks for is the equivalent of demanding that Le Corbusier would have to ask at every step "how could my design, supposedly built on sound principles, have negative consequences?", which isn't something I recall him ever doing.

Of course it won't be enough, but one could argue that if the experts don't even bother to try, things will be even worse.


> demanding that Le Corbusier would have to ask at every step "how could my design, supposedly built on sound principles, have negative consequences?"

This is a good point, and I'm very much in favor of that. The man himself seems to have been much more fond of questions like "how can I force people to accept a version of this design that uses round numbers for elegance?"

I think the proposal made me jumpy because I have so many bad associations with asking technologists to explain the consequences of their work to others. It calls to mind things like The New Digital Age, which seemed totally devoid of introspection (and frankly stupid or even malicious in a lot of places) but was a huge hit with people making important choices about investment and regulation.

I'm still very hesitant about things like the mention of public funding boards using these assessments to shape their decisions, but I appreciate your point about preemptively looking for possible negative outcomes. Something like a medical attitude that considers failures and negative events separately could potentially make a lot of sense. We live in a world where the UK police are using facial-recognition software which provides 95% false positives, and it would be nice to at least have an academic framework for talking about how that's not just ineffective but actively bad.


I understand what you mean. Keep in mind this is intended for the context of peer review; an ultimately transparent discussion, but not directly answering to the public. Ideally it would become another form of constructive criticism among scientists.


That would be great if our engineering education system placed any real value on the humanities or social sciences. It doesn't. Most of the engineers I know aren't even remotely equipped to work out how their products might impact society, and to be honest even fewer actually care enough to change their behavior or take risky positions in front of their bosses.


>Engineers are not philosophers

Hold your horses, some of us engineers have philosophy degrees!

In all seriousness, engineers are more than capable of considering the impact of their creations. But can you predict future uses for technology with accuracy? Of course not.

I'm not sure how relevant it is for most work. People will always find ways to use technology to the detriment of humanity, but the overall effect of technology is beneficial. The internet is obviously for the benefit of humanity, yet there are plenty of people using it for evil.


If a person says "Engineers are not philosophers", the common interpretation is not "All engineers are not philosophers", it's "Nearly all engineers are not philosophers".

The small set of engineers with philosophy degrees does not negate the parent's statement.


Did you just pick a formal logic debate with a philosopher? I disagree with your interpretation of the language used. The comment was authoritative in tone generally, and this line said (in my estimation) that the argument was "if someone is an engineer, they are not a philosopher". Even worse, there's a suggestion that this philosophical work shouldn't burden those "least qualified to perform it". Leaving the philosophy to "better men" is how we justify evils committed by underlings and evokes thoughts of the Nuremberg trials. Every person should come to terms with the moral implications of their actions.

In any case, one doesn't need a philosophy degree to do philosophy, and many of the great engineers and scientists of history had a great deal to say with regard to the moral implications of their work.

Even today, engineers at many of the big tech companies are objecting to some work their companies are doing. Many others choose not to work for companies whose mission they disagree with.


No formal logic debate here; I was explaining colloquial use.


Then don't call yourself an engineer. Engineers build bridges and airplanes, and when those fail, engineers are investigated for culpability. If the failure is due to a design flaw or cut corners, engineers are held responsible. I say this as a computer scientist with an undergrad in eng and a PE.


Those are first order effects connected directly to the immediate aim of allowing transport over an obstacle. A civil engineer does not consider "who" will cross or what their aims will be. For example a bridge may allow an enemy army to move their tanks faster during wartime.

Similarly tech shouldn't have to consider whether someone with unrelated goals will use their tech for unwanted aims.


If you know your software will increase emissions to illegal levels, and still write it, you are culpable.

If you write software for autonomous cars that is not foolproof at detecting pedestrians, you are culpable. In aviation, catastrophic failure means at least one death, and the system should have an overall probability of failure of 10^-9. Critical software is frequently written at one or two orders of magnitude below that. That means 95%, 99%, 99.9% are not enough.
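
For a rough sense of the scale gap being described, here is a minimal sketch (assuming, purely for illustration, that the 10^-9 target and the accuracy percentages can be compared as per-event failure probabilities):

    # Hypothetical comparison: "NN% accurate" software vs. a 1e-9 failure target.
    import math

    aviation_target = 1e-9              # catastrophic-failure probability cited above
    accuracies = [0.95, 0.99, 0.999]    # the "not enough" levels from the comment

    for acc in accuracies:
        failure_prob = 1 - acc          # e.g. 95% accurate -> ~5e-2 failure probability
        gap = math.log10(failure_prob / aviation_target)
        print(f"{acc:.1%} -> failure probability {failure_prob:.0e}, "
              f"about {gap:.1f} orders of magnitude short of 1e-9")

Even 99.9% accuracy is still about six orders of magnitude away from that kind of target.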


What are the limits of culpability, and what are you responsible for? The article is pushing that definition beyond the first order effects of safety. And the proposed solution is that gatekeeping is done by the press and the public. How well do you think that will go?

If you write software that replaces someone's job are you "culpable"? If you create a phone and people are killed when someone texts and drives are you "culpable"? If you create a better facial recognition program as part of building a secure access systems and someone uses that system to discriminate are you "culpable"?

I find this unending process of creating "victims" just so they can be "saved" tiresome.


I find engineers and entrepreneurs who shirk their responsibilities as moral agents tiresome.

If a thief comes and steals money from under the mattress meant to pay for your kid's college tuition, and your kid is dropped because she can no longer pay, your kid's loss is a second order effect of the thief's theft, but I still hold the thief responsible*

* this is a hypothetical story; please don't save that kind of money under your mattress.


Your argument has some basis in law and I do find it compelling. The nuance to my argument is that the original creator did not have nefarious aims. I believe that does make a difference.


I think that's fair. I like to consider three factors when considering an original creator:

1. Did they have nefarious aims?

2. Did they reasonably attempt to consider and solve issues surrounding first and second-order effects of their creation?

3. How did they react to previously unknown first and second-order harms when they found out about them?

The first two, I tend to give creators the benefit of the doubt about. But I've become quite dismayed by the responses of creators to the harms of their creations.


I assume the common response to #3 is that the benefits outweigh the costs. Do you not consider that a compelling argument?


Benefits to whom? The engineer and entrepreneur? Or the public at large?


Benefits to management when the public actually believes the culpable will be punished when shit hits the fan.

Tell me, in the VW emissions scandal, did that happen? No.

Germany literally changed the laws retroactively to avoid holding their own companies to account. This is technically illegal, but the judicial system REFUSED TO LOOK AT IT.

Given that both the management of these firms and the government want to commit fraud when culpability is assigned, what point is there in discussing the rules? They won't be applied when it matters.

But of course there are large advantages to people believing they will be applied. Like there are advantages to people not responding to having their wallets taken from their pockets, their children kidnapped for sale into abuse, their houses robbed empty ...

This is the US government in action. Other governments, including European ones, are worse, not better. These are the people who make and enforce the rules. Why are we discussing whether the right things will happen?

https://www.youtube.com/watch?v=eqBAOX6Qegk


And if her college boyfriend becomes depressed after she leaves and loses it, gets drunk, and hits someone crossing the road while driving, I still hold the thief to have some moral culpability. Moral culpability is shared and distributed across moral agents in time and space, as some function of causal distance and intent.

It is the second word, intent, that is the hang-up. If a person hurts someone but it was not their intent to do so, that person is typically not held morally responsible. So, at least in the Western concept of moral culpability, intent matters. But... people also have a responsibility, a responsibility to try to anticipate the consequences of their actions. One cannot shield oneself from culpability by intentionally remaining ignorant. I think that engineers avoid thinking about these things, not out of humility, but because it's in their interest to do so.


> Moral culpability is shared and distributed across moral agents in time and space, as some function of causal distance and intent.

Absolutely agree.

> I think that engineers avoid thinking about these things, not out of humility, but because it's in their interest to do so.

This is perhaps the reason we differ. My experience has been that the vast majority of technologists want to make the world a better place.

I believe you see the industry as filled more with people like Wernher von Braun who want to build something cool regardless of social cost. At the dawn of the social media revolution, the dominant theory in tech circles was that information shall set you free. The people making it did not come into the industry for status; being a nerd was still a social negative. Today, with ever-rising salaries, things are changing.

Perhaps you are right that as the industry matures the sphere of liability should increase. I would still argue that the suggestions in the original article swing much too far in that direction. We shouldn't prevent scientific discoveries merely because engineers may build something bad with the information. On balance, scientific discovery has improved the lives of everyone by many orders of magnitude - even if it's distributed unevenly.


That's what standards are for. The threshold is determined by harm. Software practices are still in their infancy, which is why it is hard to take them seriously as mature disciplines. Now that software is having a direct impact on people's lives on a large scale, it's time to set some standards, starting with autonomous driving and mass surveillance. Then have people trained to understand those standards and adhere to them as best practices.


If you do it knowingly, yes... you have the ability to say no. Do you think you are not a victim, or are you just complacent enough that as long as you have X, you are fine and can go on knowing you were a good person? You don't have to create victims; you do have an obligation to prevent them.


> If you know your software will increase emissions to illegal levels, and still write it, you are culpable.

Again a first order effect

> If you write software for autonomous cars that is not foolproof at detecting pedestrians, you are culpable.

Again a first order effect connected directly with the immediate goal.

However if you write communications software and people use it to say bad things, those are second order effects. You provided communication software that works as advertised and has beneficial purposes. Others misused it.


Is the guy who invented valves responsible for the Holocaust?

Not trying to be inflammatory. This is specifically trying to find out whether you think there's a point where culpability disappears. I would appreciate a yes/no answer. There's no nuance to the question. I'm not talking about the specific valves, just the notion of a valve.


Do you understand the difference between a first order effect and a second order effect?


I don't (btw, I'm a software engineer and scientist with extensive training). So I looked it up.

Did you mean first order or second order the way physics people mean it (a direct linear relationship with a strong effect on the unknown variable)? Or the way that MBAs mean it (they have their own definition)?

Or did you mean structural engineering, same as physics: https://www.quora.com/In-structural-engineering-what-are-fir...

I think what you probably mean is a colloquial interpretation, where first order means something is directly causal in an obvious way, while second order is more of an indirect effect that occurs through complex dynamics.

Either way, I don't think those terms are well defined, and they're kind of complex enough that using them to argue for morality, and calling people out for not knowing your definitions, makes you look caddish.


I agree.

I doubt that they could mean causally indirect. I think it must mean something like: not used in accordance with the engineer's intent, especially when that intent covers a general affordance (e.g. affords communication over long distances) but is being judged for a more specific affordance (affords communication over long distances to order a terror attack). I dunno. I'd have to be a philosopher to work it out.


I think that's spot on. When people say something is a first/second/nth order effect, they are really just arbitrarily drawing discrete lines in a continuous chain of causality.

The problem of where to draw those lines is so difficult that legal systems resort to constructing tests based on whether a "reasonable person" could have foreseen the specific consequence of an act.


Those are also, as the OP has repeatedly addressed, first order effects.


Yeah but that’s a different argument to what you’re replying to.

If that civil engineer sold or allowed (for any reason from negligence, insufficient security protocols, malice, ignorance etc) blueprints for his design to end up in enemy hands, he most certainly would be held liable.

The thrust of that point is the very idea that software engineers on projects involving user data, regardless of how and what they think they are building, are not simply building “bridges”. They are building systems that have the potential to be misused as mass-surveillance apparatuses. And they need to build their systems accordingly, with appropriate data-security and with acknowledgement that their work could leave them personally liable to action should their work be found to be used as a threat-vector towards their users.

The current attitude of “well people know what they’re signing up for and since they signed the User Agreement, we don’t have any liability here” towards users in the industry is sick.


I literally signed up for HN, something I've been uninterested in doing, just because of this comment.

Saying that engineers need to make responsible decisions in their software design is one thing. But you imply that misuse is a liability for the designer? So if someone takes an engineer's code and modifies it for malicious intent the engineer is liable? That's absurd.

That's like saying the person who designed the airplanes used in 9/11 is partially liable for 9/11.


Absolutely, if that data is stolen due to the designers’ neglect in the security aspects of the systems they signed off as “adequately designed”.


Yup! It just concerns me since this topic has a lot of great discussion but many people are missing the distinction between different types of effects. Sorry if I misunderstood what you were saying! For others who are confused what I mean...

Consider:

The person who makes a car that someone uses to kill someone by running a bystander over. The engineer isn't responsible.

The person who messed up the Prius with the issues where the gas pedal or whatever didn't work. The engineer IS responsible.

The first example is analogous to a software engineer writing image recognition software for a safety app, and then someone maliciously taking the software and using it to discriminate. The engineer is NOT responsible. If someone takes your software (it could even be open source) and applies it or modifies it in a malicious way, they are responsible, not the engineer.

The second example is where an engineer makes a widget app that has a vulnerability but signs off on it, and then widget purchasers' credit card info is stolen. The engineer is responsible for negligence and should be liable for the misuse.


No, yeah sorry, I’m obviously being super brief here to just get the point across without writing a thesis on my phone :)

Although, predictably enough, I followed up with more thesis-length posts to clarify my point! Namedropping Einstein too; who do I think I'm impressing? (But Einstein is relevant to this debate, although he made the crucial mistake of only informing the government and not informing the public. Obviously this was well-intentioned on his part and not malicious at all, but it's a lesson that every developer of new innovations needs to take on board.)

This isn’t about specifically demonizing “engineers” or innovation. Or even to do with liability where “that engineer developed a car that murdered someone”. Liability laws are robust enough to figure out whether the gas pedal was functioning correctly, the driver is at fault, or if not, the engineer (or rather, the company he works for) is at fault.

This debate is about broader strokes: should developers of new technologies be ethically bound to inform both politicians and the public about their innovations, specifically so that, after a public debate on the potential negative consequences of that technology, the public can then demand that their politicians produce robust laws protecting the public from misuse of those technologies? (I say yes. I know it's tough because I love just developing cool new shit too, but we developers have got to take a step back and realize the future implications of what we make. Data gathering, manipulation and retention is a ticking time bomb. And who knows what's next.)

Another example is facial recognition. It's depressingly funny to see experts who know so much about the technology proclaim it to be no big deal without understanding the ramifications of living in a society where your every movement is tracked and potentially monitored. That is a literal police state, and the US constitution has amendments made hundreds of years ago specifically to outlaw such actions and protect the public at large, both from private companies tracking their movements and from the government itself. To throw away all of that progress in the name of "facial recognition will be awesome because you can log in to your phone quicker" is not a good move, because once that technology is in the wild, until laws catch up to robustly protect the public, who knows what will be done to the public at large with that technology. Ditto for "who cares about data retention and website tracking, it makes marketing so much easier!" and "don't worry about your DNA being stored and sold by ancestry sites indefinitely because it's so cool to know that you've got a 7th cousin living on a different continent".


> engineers are investigated for culpability

Funny how you believe that. Remember the Miami bridge collapse?

Workers on the ground misplaced one of the supports, and then didn't stop work when loud noises started coming out of the concrete. This was not something documented wrongly in the design; it was those same companies first mismeasuring the location of the supports, then simply moving them around when they didn't fit.

Who's culpable? The designers, of course.

Or take Dieselgate. European, German and French car manufacturers systematically and purposefully installed software in the cars to commit fraud on the testers (the government testers knew that this was happening).

Who's culpable? A single engineer, of course.

Not the regulators that knew what was happening.

Not the European investment bank, who had seen reports on this.

Not Volkswagen management, that had built several facilities explicitly for committing this fraud.

Let's stop pretending the problem with engineering will be solved by assigning blame to engineers. It won't. It can't.

Large scale fraud, committed by management and government is the problem. It cannot be fixed with engineering.


What!? Professional engineers do consider the greater good of society in their code of ethics. Introductory courses in ethics are a mandatory part of the curriculum where I’m from.


And yet that is not even close to the hand-wringing being discussed in the article:

"In my first ever AI class, we learned about how a system had been developed to automate something that had previously been a person’s job. Everyone said, “Isn’t this amazing?” — but I was concerned about who did these jobs. It stuck with me that no one else’s ears perked up at the significant downside to this very cool invention."

"Then, the gatekeepers are the press, and it’s up to them to ask what the negative impacts of the technology are."

Building an entire society of victims and allowing the hysterical press, who are in the business of feeding a hysterical public, to act as gatekeepers sounds like a formula for paralysis.


I'm appalled at the lack of ethics in civil and mechanical engineering!

Automotive engineers willfully destroyed opportunity for stablehands, whip manufacturers, and train conductors, among hundreds of other occupations annihilated.

Civil engineers with their oriented strand board trusses, reinforced concrete, and dimensional lumber stole the livelihoods of masons, carpenters, foresters and so many more.

Engineers have centuries of unethical practice to atone for. They need to stop their work because everything that they do is harmful and unethical!


I guess a little humility is warranted. But I think your attitude is really about letting engineers shirk their responsibility as moral agents. You know, sliiiiide by, as they've always done. Take responsibility.


> But I think your attitude is really about letting engineers shirk their responsibility as moral agents.

I absolutely disagree. That is why I specifically mentioned culpability for first order effects. If you build a gun for someone you know to be violent you have culpability for its use. What I disagree with is the broadening of this "liability" to include second order effects. That we are now supposed to predict chaotic systems and know in advance all perverse outcomes - and that we should have moral liability should we fail in this task.


we have moral liability for second order effects (whatever those are)


An abuser drives over a bridge to reach his victim's house, where he commits an act of domestic violence.

You've committed to the position that the engineer who built the bridge is morally culpable for that domestic violence.


Perhaps, but as on Anubis' scales, the good is weighed against the evil that a man has done. Cannot the engineer be culpable for facilitating a crime, and yet also be responsible for improving the lives of many thousands of others, such that on balance the good outweighs the bad? Culpability is not an absolute. The engineer may be culpable, but how culpable? Relative to the culpability of the one who held the knife? Relative to the man's parents who suspected his intent but failed to stop him? Relative to a society which has decided that it needn't guarantee access to the mental health care that the man required? Culpability is distributed, and it isn't all or nothing.


You stated that we have moral culpability for second-order effects, whatever those are. If a person engineers a bridge that works as designed, what would the second-order effects be?


"whatever those are" was a poor attempt to indicate that I wasn't sure what people mean by second order effect. was it that something that was not intended, or is it that it happens further in the causal chain, or is it something about particular affordances and general ones (a bridge affords crossing. a bridge that affords crossing both affords crossing a bridge to buy your groceries and crossing a birdge to commit a crime.) Or is it just things that are hard to predict might occur? Or things that have an intervening autonomous moral agent between the engineer and the victim? its not clear to me.

A man-made dam might increase a city's available water supply and allow power generation, but it can also increase CO2 emissions and do harm to salmon reproduction. Increased CO2 emissions and detriments to salmon reproduction might contribute to global warming in the first case, and to an increased price of fresh fish and a declining seal population in the second. An increased price of fresh fish means less availability to poor consumers and the loss of what might otherwise be a valuable part of their diet. Global warming means climate change, sea level rise, an expected increase in unrest, and large scale ecological instability. All those things mean that my daughters may not enjoy the relatively secure life that my generation has led...


On what basis?


On what basis are we culpable for anything? If you can answer that question, I'll try to answer yours. Not trying to be flippant (we've had a good discussion so far), but I don't really have an answer, except to say that I don't know on what basis we might distinguish the two cases. If you can tell me what makes someone morally culpable of anything, maybe I'd have a clue.


The basis for my arguments comes from English law. Who is guilty of what has been fine-tuned over centuries. It's not enough that something bad happened. There must be either intent to harm or gross negligence.

The modern trend, both in law and society, of statutory offenses and ex post facto "something bad happened, so let's see what we can charge him with" is something I find deeply disturbing. I find the original article's proposals yet another step down that road.

Our forefathers accepted the universe was unpredictable. We seem to feel we have enough grasp now that if bad things happen someone must be to blame.


[flagged]


you are just making a point, but I'll take it seriously for a moment.

I think that handles can be provocative, and some are certainly chosen to do psychological harm (e.g. to racial minorities). If I had known you, and known that you would be triggered by this name, and that the harm was substantial, I would have some culpability, especially if I chose it with the intent of triggering you. But I didn't know you, and I didn't know that you would be triggered, and I did not intend to trigger you, and I had no way of knowing it would trigger anyone, nor a reasonable expectation that it would. And so there is really no culpability.


This gets to the core of our disagreement. I don't believe second order effects (i.e. things that involve another human decision maker using your creation) to be knowable in the general case. The exception being if you specifically advertise it for that purpose or similarity to existing tools makes it obvious.

In the case of a scientist - the ultimate effects of their discovery will always be second order as they are not the engineers applying the science.


But the second order effects of triggering you, such as you being a jerk to someone, are something that I would have at least some culpability for. It's... complicated. But we still have to try to think about what we are doing.


> Engineers are not philosophers and should not be placed in this role. We do not have the tools to do it. As with any human we have a duty to consider first order effects - but even the most astute philosopher will have trouble anticipating second order effects. This burden should not be placed on those least qualified to perform it.

Here are two rebuttals to the above statement.

1. At the level of projects, it's asking engineers to think a bit more broadly about solutions. For instance, when designing a user comment service for social media you can trade off between designs that encourage informed debate and designs that encourage ranting flame wars. In many cases both designs have similar implementation costs and meet business requirements, so why not pick the one that is more socially responsible? As these choices become better understood, they should just become part of user experience design.

2. To develop the trade-offs, we should direct more research toward the social consequences of technology and take the results seriously. Great engineers are philosophers, or at least more broad-minded about the interactions between technology and society than some of the current leaders in Silicon Valley. Andy Grove, Bill Hewlett, David Packard, and Bill Gates were/are not just problem solvers but examples of engineers who have thought about how to benefit society as a whole.


> Engineers are not philosophers and should not be placed in this role.

Which is how we wind up with companies hell-bent on vacuuming up as much data as possible and selling it to anyone who asks. Perhaps if some software developers (or, at this point, some legislators) stopped to think about the ethical considerations of the tech industry’s actions, we all wouldn’t have to have conversations about how other countries can buy elections on the same platform where you share pictures of your food.


You're kidding yourself if you think that electioneering and lobbying are a new thing. Look at the Saudi role in promoting Wahhabism, or religious fundamentalists worsening AIDS epidemics and promoting homophobia by pushing bullshit on the third world, for one example alone - let alone actual CIA or KGB influence. Before people were falling for literal fake news they were falling for the Satanic Ritual Panic, because the truth wasn't what they were concerned about but how it made them feel.

Tech was never the issue here - people were. The drive to scapegoat the upstarts should be regarded with great suspicion. While there may be questionable ethics involved with these companies, this looks to be a classic attempt to scapegoat the troublesome for matters that are peripheral at best; any action they could have taken would have dissatisfied people, much as both the left and the right are convinced Facebook is biased against them because it doesn't reflect the world as they wish to see it.


I have a slightly different take: engineers shouldn't be blamed for negative outcomes that propagate down from the political and cultural levels.

For instance, machine learning is now widely used for surveillance and advertising in ways that violate basic presumptions about privacy rights. Thing is, I imagine that if, say, ML researchers had gone on strike over privacy issues somehow, society would have essentially told them to shut up and get back to work. Sure, the same larger society now wants to blame ML for the constriction of privacy and for extensive state and private surveillance networks, but it was that very society which cheered the construction of these exact surveillance networks "becuz terrorizm" and for the sake of "free speech" (for billionaire multinational corporations) and for other awful reasons.

The people who were skeptical or critical over this whole social trend knew damn well it was all a bad idea, and we said so. Society deserves to get what it voted for, with both its ballot box and its dollars, and on some level, it deserves to get it good and hard without scapegoating the hired help, when it was them in the first place who told us skeptical, critical engineers and scientists to shut up and produce what society wanted from us.


I agree but have a bit different take...

"For instance, machine learning is now widely used for surveillance and advertising in ways that violate basic presumptions about privacy rights."

Those same mathematical techniques are also being used to more accurately detect cancer in MRI scans. Saying everyone needs to be held responsible for second order effects is unrealistic and ultimately damaging to society. Blaming the toolmaker for things they cannot control will only ensure that no tools get made.

To me it points to a larger issue in our society that we are on a trend of completely removing context. Context no longer matters, what matters are the effects, regardless of whether you can reasonably be held accountable for them or not. It is thought-crime induced by those who in the same breath rail against that totalitarian behavior and then skip to implementing a version of it themselves.

A number of people on this thread immediately jump to the argument on surveillance and data privacy which is a useful discussion but tangential to what is being proposed in the original article. The article is moral hand-wringing with zero clarity or practicality. And the judge of who is culpable or not is left to a hysterical press, driven by a hysterical and sometimes uninformed public.

Look at what this behavior has done to political (and personal) discourse. Now imagine it being applied to science. No thank you.


>Those same mathematical techniques are also being used to more accurately detect cancer in MRI scans. Saying everyone needs to be held responsible for second order effects is unrealistic and ultimately damaging to society. Blaming the toolmaker for things they cannot control will only ensure that no tools get made.

As a matter of fact, my work covers fMRI scans of brains for a neuroscience lab, so I'm entirely with you here, but again, with a small caveat:

If society wanted the "toolmakers" (us) to take responsibility for which tools we make and how others use those tools, it would have to give us the actual power to decide what we work on and for whom. Instead the rule has usually been, "if you have ethical qualms about this, we'll find someone who doesn't". All exit and no voice means a society where nobody takes responsibility, because everyone self-selects into positions where they can agree with what they're doing; and if the broad masses and the hysterical press want to excoriate those doings while also materially demanding they continue, they get to do so.

It's like when people say that engineers should refuse to work on weapons that would violate the Geneva Conventions -- but the same people then vote for a government that moves funding from the NSF to the Defense Department!


There are more knowledge creators/workers than ever before. I don't think that expecting more information from any given study (especially if it is applied research) is detrimental to society or the study itself.

It's about raising the standards because we can. Standard publication sections: intro, materials and methods, results, discussion, conclusion. It feels like you could add a "speculative impact" section somewhere between discussion and conclusion.
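As a rough sketch of what that might look like (purely illustrative, assuming a standard LaTeX article layout; the section name is my own, not from the proposal):

  \section{Results}
  \section{Discussion}
  \section{Speculative Impact}
  % Expected side effects and unintended uses, argued from the existing literature.
  \section{Conclusion}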


Computer Science is a new field, and individual developers can have far-reaching effects on society, direct or indirect, that extend further than in any other engineering discipline. My belief is that it is 100% the responsibility of researchers and developers to think about the ethical and social implications of their work. As an example, a podcast I listened to recently interviewed CS researchers working on "deep fake" videos. When asked whether they thought their work could have negative societal impact, their response was along the lines of "it's just really cool to make this". CS as a whole has a lot of maturing to do, and I hope we get there sooner rather than later.


That's why diverse teams are important. You're right, we can't expect a CS major to understand social impact, but if you're working on AI or weapons or some other high-risk project, there should be someone on the team who does.


I have to agree. I think that calling it "patronizing" is harsh, but science generally does not progress on a timeline that depends on and adapts to humans; it is the other way around. And whereas history may judge the oil industry negatively, it also judges theocratic regimes negatively for trying to "mitigate" progress.


The story goes that Thamus said many things to Theuth in praise or blame of the various arts, which it would take too long to repeat; but when they came to the letters, [274e] “This invention, O king,” said Theuth, “will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered.” But Thamus replied, “Most ingenious Theuth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; [275a] and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess.

Socrates, From Plato's dialogue Phaedrus 14, 274c-275b

http://www.historyofinformation.com/expanded.php?id=3894


Totally agree. And this is not new at all.

All you have to do is read the history of engineers and scientists who worked on nukes or rocketry. The ones that tried failed miserably. Understanding why is crucial in understanding the limitations of domain experts.


There is a big difference between computer science and computer engineering.

Science is about understanding. Engineering is about using that understanding for building.


Nobody would be discussing the moral implications of scientific discoveries if they didn't ultimately lead to machines that have effects in the real world. Science and Engineering are inextricably linked in this discussion.


> Science and Engineering are inextricably linked in this discussion.

Of course they are, but this article is very specifically about the role of the scientist. Not the engineer.

Will negative disclosure cause problems for engineers? I don't know. What I do know is that it isn't fair to place that burden of knowledge entirely on scientists while tying their hands behind their backs with respect to open discussion in a professional setting. Academic publishing is intended to be open, without fear of professional and personal repercussions for increasing factual awareness. Get rid of that and, well, sit back, relax, and I hope we can all enjoy the apocalypse of knowledge, because that's where things are headed without the ability to discuss existing problems professionally.

> Nobody would be discussing the moral implications of scientific discoveries if they didn't ultimately lead to machines that have effects in the real world.

That's just factually wrong. Mathematicians. Philosophers. Logicians. All the living and dead theorists who built the abstract foundations essential for the functioning of our machines. People who deal entirely in the world of the abstract. You don't develop a passion for abstraction because you want something out of reality. You develop a passion for abstraction because reality is sometimes terrible.


> That's just factually wrong. Mathematicians. Philosophers. Logicians. All the living and dead theorists who built the abstract foundations essential for the functioning of our machines. People who deal entirely in the world of the abstract. You don't develop a passion for abstraction because you want something out of reality. You develop a passion for abstraction because reality is sometimes terrible.

We don't have moral discussions about the discovery of a new law of physics or a mathematical lemma. This only becomes an issue when we can see how it can be applied to some aim. Thus if there were no engineers, nobody would consider the scientists' moral obligations, with the exception of their experiments themselves causing harm.

The argument for limiting science is that engineers might take it and create some terrible machine with it. Perhaps the most notorious example is the nuclear bomb.

I believe we agree on the role of science; I don't think we agree on what scientists' culpability should be.


> We don't have moral discussions about the discovery of a new law of physics or a mathematical lemma.

Abstraction affects how we think. It shapes expectations by building a language in which the present can be predicted within some objective, measurable tolerance. The problem always has been, and always will be, that people can't be measured and tested like machines.

> This only becomes an issue when we can see how it can be applied to some aim.

So yes. That's correct, that's the terrible thing that can be observed and measured when it happens. Prevention is important.

> Thus if there were no engineers, nobody would consider the scientists' moral obligations, with the exception of their experiments themselves causing harm.

Sorry, no, that again is ridiculous. You think that way; not everyone thinks the same. Intent matters. Experimentation is intended to discover in a controlled environment while giving heavy weight to moral implications. People choose to opt in to scientific studies, and they trust that process when they make that decision.

> The argument for limiting science is that engineers might take it and create some terrible machine with it. Perhaps the most notorious example is the nuclear bomb.

Yes, and what makes us think we are so evolved and so advanced as to believe we can't accidentally create something similar with present technology, lacking awareness of its creation? Evolution won't protect us against our own idiocy.

Reasoning about computation affects cognition, which has real world consequences that extend to everything else that is not reasoning about computation. That's just all there is to it.

> I believe we agree on the role of science; I don't think we agree on what scientists' culpability should be.

First, they should be able to talk about personal, first-hand experiences openly without being professionally attacked. Second, they should be able to talk about existing problems without fear of whose bottom line it might affect. There should not be blame in systems where everything is hard as heck to understand. Obligation to the truth, yes. Blame for the belief that there was intent to create a problem by virtue of uncovering one? No. That has shortsightedness written all over it.


I would like to engage more substantively with you, but I don't think I understand where you stand on this issue. Do you agree or disagree with the suggestions in the original article?


> Brent Hecht has a controversial proposal: the computer-science community should change its peer-review process to ensure that researchers disclose any possible negative societal consequences of their work in papers, or risk rejection.

Agreed.

> The idea is not to try to predict the future, but, on the basis of the literature, to identify the expected side effects or unintended uses of this technology.

Agreed.

> No, we’re not saying they should reject a paper with extensive negative impacts — just that all negative impacts should be disclosed.

Agreed.

> But we’re moving towards a more iterative, dialogue-based process of review, and reviewers would need to cite rigorous reasons for their concerns, so I don’t think that should be much of a worry. If a few papers get rejected and resubmitted six months later and, as a result, our field has an arc of innovation towards positive impact, then I’m not too worried. Another critique was that it’s so hard to predict impacts that we shouldn’t even try. We all agree it’s hard and that we’re going to miss tonnes of them, but even if we catch just 1% or 5%, it’s worth it.

Agreed.

> We need to be saying, based on existing evidence, what is the confidence that a given innovation will have a side effect? And if it’s above a certain threshold, we need to talk about it.

Agreed.

> We believe that in most cases, no changes are necessary for any peer reviewers to adopt our recommendations — it is already in their existing mandate to ensure intellectual rigour in all parts of the paper. It’s just that this aspect of the mandate is dramatically underused.

Agreed.

> Then, the gatekeepers are the press, and it’s up to them to ask what the negative impacts of the technology are.

Agreed.

> Disclosing negative impacts is not just an end in itself, but a public statement of new problems that need to be solved. We need to bend the incentives in computer science towards making the net impact of innovations positive. When we retire will we tell our grandchildren, like those in the oil and gas industry: “We were just developing products and doing what we were told”? Or can we be the generation that finally took the reins on computing innovation and guided it towards positive impact?

Agreed.

I haven't read this article (https://acm-fca.org/2018/03/29/negativeimpacts/) in its entirety, but I agree with the bolded points. I've only viewed the bolded points up to the examples.

--

Edit: I want to say thank you for your respectful tone in spite of my slightly aggressive(?) one. I'm grateful to people like you because it reminds me that it matters a lot.


> The idea is not to try to predict the future, but, on the basis of the literature, to identify the expected side effects or unintended uses of this technology.

What will you do with this information?

> No, we’re not saying they should reject a paper with extensive negative impacts — just that all negative impacts should be disclosed.

So at best it's useless, and at worst it serves to fuel the imaginations of those who would use it for evil.

> our field has an arc of innovation towards positive impact

How would this change the course of innovation? The only way I can see is if it's used to censor.

> We need to be saying, based on existing evidence, what is the confidence that a given innovation will have a side effect? And if it’s above a certain threshold, we need to talk about it.

Why is a specialist in a specific technology qualified to determine negative societal outcomes of it? Human societies are extremely complex systems and even those specialized in it have poor predictive track records.

> Then, the gatekeepers are the press, and it’s up to them to ask what the negative impacts of the technology are.

Why would the press have this power? They've never had such substantial control before. Governments have been able to suppress independent nuclear research in the most extreme case but the laws surrounding that are not exactly proven. If we barely allow our government this power why should we give it wholesale to an unelected press?


First, this is hypothetical.

> What will you do with this information?

If I were in the position of a researcher, I would talk to the people it affects directly, or try to communicate my awareness to people in a similar interdisciplinary dilemma, who could then communicate their awareness to people who are 'on the ground', so that the problems both researchers are aware of can be identified and then resolved in real life. The point is progress: one direction that moves forward, not backwards or in cycles. Obviously this is a state of mind one can choose to adopt or ignore, to see only the cycles or to notice the differences with positive improvement universally in mind. But I'm one person; stuff like this will always be a group effort.

> So at best its useless, and at worst it serves to fuel the imaginations of those who would use it for evil.

Group effort, squish evil. Evil is not something I know how to identify, though; that's always been the dilemma. The real evil seems to be the thing that actually extinguishes your existence entirely: death(?).

> How would this change the course of innovation? The only way I can see is if its used to censor.

I don't think the intent is to censor, I think the intent is to alleviate existing censoring of problems people are aware of but have no way of speaking about.

> Why is a specialist in a specific technology qualified to determine negative societal outcomes of it? Human societies are extremely complex systems and even those specialized in it have poor predictive track records.

Maybe they have some first-hand experience they would feel embarrassed, ashamed, or afraid to disclose, because being personally identified with a given experience is simply not accepted professionally at present. It gets relegated to therapy, even though there might be actual problems people have some awareness of. Fear of being implicated as the cause rather than the solution.

> Why would the press have this power? They've never had such substantial control before. Governments have been able to suppress independent nuclear research in the most extreme case but the laws surrounding that are not exactly proven. If we barely allow our government this power why should we give it wholesale to an unelected press?

It's not power, it's trust. It's a researcher saying to a reporter: 'Hey, if we make mistakes, point them out, because we won't always be able to see every error we make, especially when we work in a system that requires minimal visible error as a condition of our own approval.' Errors in knowledge systems that run on principles resonant with their own structure lead to destabilization of that structure. There has to be some tolerance for making errors; that's the give and take. You see me make errors that I don't see myself making, so point them out. That's the trust given to the press. Speak up.

> If we barely allow our government this power why should we give it wholesale to an unelected press?

Bodies that self-regulate must regulate one another. Checks and balances.


I disagree, because part of the job of engineering is stakeholder analysis, where engineers consider all the groups that will be affected by a system before developing the system's requirements. This is one of the things that differentiates engineers from technicians. For CS to be more than a field of coders, its practitioners should consider the larger impact of their work.


I agree that it's unlikely that anyone can predict broad, long-term consequences of any particular technology. On the other hand, the people who design Facebook or mobile games to be maximally addicting definitely understand something about how their software is affecting society. It's not a huge leap to go from "I'm pretty sure that certain people will become obsessed with this," to "It's probably harmful for lots of people to become obsessed with this."

If a person can predict the consequences of design decisions well enough to get paid for it, they'll probably have some idea of potential negative consequences as well.


There may be cases where negative consequences can be predicted, but this sure isn't it. Of all the predictions I heard about Facebook when it was new, "fake news" and "depression from comparing yourself to friends out having fun" and "an increasingly clickbaity and outrage-stoking newsmedia" were not among them. Not even close. Facebook is a great example of how programmers are not good at predicting negative consequences. The predictions I heard were mostly based on comparisons with Myspace and Livejournal. We are not good at this. Plumbers shouldn't try to do your electrical wiring, doctors shouldn't try to manage your database, legislators shouldn't try to pick your programming language, and programmers should not try to determine how society will react to technology.


I think you (and some other commenters here) are making the mistake of saying that because we cannot predict the future perfectly, we are absolved of any responsibility for thinking about negative outcomes.

> programmers should not try to determine how society will react to technology

That is exactly one of the most important parts of our job.

Do you think doctors should not think about the wider impact of treatment decisions? Legislators should not think about the second-order effects of the laws they pass?

Yes, prediction is difficult (especially about the future, as Yogi Berra said). We all know this. Most of the things that most programmers are doing have known and predictable effects, because most of what we are doing is not new. If we had no idea how society would react to technology, why would we even develop it? Of course thinking about the effect that your work has on the world is a basic responsibility of any human being; how could it possibly be otherwise?


Respect.


I'm not sure it's so much a matter of picking negative consequences out of the blue sky of possible consequences, but rather wondering, "What happens if our software works exactly how we intend and we gain the ability to really manipulate people in a certain way?"

I agree completely that it's hard or impossible to just predict in general how society will react to technology. But in many cases, there are people designing systems specifically for the purpose of getting people to behave a certain way. This implicitly acknowledges that they believe that they can affect behavior in a predictable way. If they can make predictions one way and be correct enough to profit from it, they should be capable of making predictions about things that could go wrong.

I agree completely that predicting something like what Facebook will look like 10 years in the future is impossible, and there's surely a lot of research where the possible consequences are infinite. A lot of results in CS research are like someone developing a new material - "Like, sure, this could be used to make a new, extra deadly tank or something, but it could be used for literally anything." In that sense, it's not useful to make predictions, but I think there's also a lot of research going on at Facebook and Google that looks closer to what I mentioned above, where they absolutely have an intention to change people's behavior and so probably also have the ability to predict the consequences of their complete success.


These are two different sets of people: computer scientists develop algorithms, ML, etc., and want to publish them. UX designers, entrepreneurs, and other customer-focused people in industry observe human behavior and (amongst other activities) design traps around it.

It is not useful to expect good predictions from the former group, and the latter group isn't really interested in publishing in CS.


The Challenger disaster is a good example of the engineer-versus-group dynamic: https://www.npr.org/sections/thetwo-way/2016/03/21/470870426...


"What happens if our software works exactly how we intend and we gain the ability to really manipulate people in a certain way?"

This is still really difficult, and quite ethically fraught.

When I was in high school and college (late 90s & early 2000s), I was one of the techno-utopians who believed that the Internet would usher in a new era of censorship-resistant free speech, free information, and free exchange of ideas with people across the globe, regardless of whatever walk of life you happened to come from. It would give a voice to marginalized people regardless of how niche their views are. By and large, it succeeded in this, and this site (HackerNews) is an example of that. I have no idea who you are or where you come from, and it could be literally half a world away from me, but I can still post this comment, and probably hundreds to thousands of people will read it.

Yet nowhere on our radar screen was the idea that maybe we wished some of those marginalized groups would stay marginalized. That in addition to giving a voice to dissidents and hobbyists and nerds and subcultures, we would also give a voice to racists and militants and pedophiles and drug traffickers. Why would we even consider them? The definition of "marginalized" involves being "out of sight of the mainstream". Yet now, one of the main complaints being levied against Silicon Valley companies is that they aren't doing enough to police hate speech or kick undesirable groups off their platforms.

And then this is hugely ethically fraught, because I'm sure that from the perspective of people in those groups, I'm the evil powermonger, and they are just exercising their human right to believe what they want. Silicon Valley has remained largely libertarian in outlook, but what if people get their wish and it starts cracking down on undesirables? Who's to say that your group would actually be one of the desirables? The new boss starts to look an awful lot like the old boss.


The idea that how we organize ourselves socially online might lead to echo chambers and deepening division was around long before the phrase "social media" had ever been used.

That this phenomenon came about through the heretofore unprecedented (sarcasm) exploitation of human credulity and vanity, that dopamine spikes are the very mechanism of addiction, and that we react more strongly to things that trip our amygdalae than our cortices, should surprise precisely no-one — least of all the people who were building tools predicated upon those things.

None of this stuff is new. None of this stuff is news. You don't get to play ignorant of the implications of the things you're building. Not while you're selling your time on the basis of how smart you are, anyway.


Perhaps not predictable at the outset, but surely FB developers had insights into the usage/sentiment data long before reporters got onto it?


> On the other hand, the people who design Facebook or mobile games to be maximally addicting definitely understand something about how their software is affecting society. It's not a huge leap to go from "I'm pretty sure that certain people will become obsessed with this," to "It's probably harmful for lots of people to become obsessed with this."

I don't agree. I think this causal relationship makes sense in hindsight, but I don't think most people capable of designing or implementing software can reliably extrapolate what its impact on society will be. That goes for both positive and negative impacts.

Researchers and engineers should still put in the effort to question the nature of their work and its impact on society, because that's a good exercise in general. But I'm skeptical that any given individual would be good at arriving at the right answer.


Moreover, I don't think the parent is giving enough credit to certain types of computer scientists and certain companies.

Facebook, Google, and some game companies definitely do hire lots of human-computer interaction Ph.D.s who are trained in exactly this sort of thing.

There's a reason the title is "computer science researchers" and not "computer programmers".


> On the other hand, the people who design Facebook or mobile games to be maximally addicting definitely understand something about how their software is affecting society.

This is an interesting point; a whole lot of companies actually do hire soft-sciences talent specifically for that purpose. It has been quite common in the gaming sector for years now (Blizzard, Valve, Bungie, just off the top of my head).


This point backs up the statement of ~this parent~ rossdavidh's reply: programmers and engineers are woefully unequipped to perform any significant social science research. If they were, then tech companies would have little to no need to hire research psychologists or human factors specialists. They'd instead hire programmers and engineers that specialize in getting people addicted to technology.


So because the employer hires specialists in weaponizing people's brains against them (via manipulating addiction psychology, &c), the programmer is somehow off the hook for implementing things that he now knows to be designed thusly, on the basis that it's the work-product of those specialists?

I'm reminded of the Tom Lehrer lyric:

  Once the rockets are up
  Who cares where they come down
  That's not my department
  Says Wernher von Braun


Your counter-argument is disingenuous, almost to the point of willful ignorance.

It's obvious that a missile can be used to send bombs across the world just as well as it can be used to send satellites into space.

Minecraft seems like a great form of entertainment that can be used as an educational tool. The fact that it might also be destructive to the development of children is non-obvious and takes some expertise outside the domain of software engineering to uncover.


I wasn't talking about Minecraft, necessarily. But consider, for example, Zynga, or any of the makers of pay-to-play games that are designed with a thorough understanding of addiction psychology in mind.

You didn't design the mechanism, but you know what it does. Are your hands clean if you write that code?


> If a person can predict the consequences of design decisions well enough to get paid for it, they'll probably have some idea of potential negative consequences as well.

Being able to hypothesize about potential risks and being able to perform experiments to verify the hypothesis are totally different skills.

You can guess that maybe your game will be unhealthy for teenagers, but constructing an experiment to verify this is a whole different ball game.


I think you're way over-estimating what a realistic ability to predict outcomes will be.

I'm reminded of back when I worked in agency land, and clients would ask us to build something to "go viral" back when going viral was a new concept. They assumed it was as easy as just saying "make it so".

Of course, it's never that easy to know that it will catch on.

How exactly do you make something that is fun, engaging, and rewarding enough to encourage regular use, but not fun, engaging or rewarding enough to abuse? Seriously, if you can answer that question, you're sitting on a gold mine of knowledge.


It's much more difficult to differentiate between "many people will enjoy and be entertained by this game" and "people will be harmfully obsessed with this game".


I agree it's hard to pick which games will be which, but it's not hard to predict that the more addicting a game is, the more people will be harmfully obsessed with it.


I see multiple top-level comments spouting righteous indignation about this being patronizing to engineers/programmers, expecting too much of them, first-order vs. second-order effects, etc.

Firstly, this is for researchers/academics who are publishing in journals, disseminating ideas for new technology.

Secondly, the suggestion also clearly states that the work will be reviewed NOT by judging the utility of the possible consequences, but for the thoroughness of the analysis. In practice, that means that the paper's authors cannot be sloppier than the reviewer (a peer in the field who can be assumed to have a roughly similar understanding of the technology). Reviewer feedback can point out any blind spots in the paper. That strikes me as a reasonable and concrete setup, unlike the FUD in these discussions.

Nobody is being asked to balance the positives against the negatives. They are simply requested to initiate a conversation about the possible negatives so that other researchers building on the idea can work towards improving those aspects!

(EDIT: Most papers/presentations already talk about the possible positive impacts -- nobody holds back on that for lack of expertise. It is only fair to ask authors to be less partisan and more thorough, as responsible researchers.)

Whether all engineers/programmers should take analogous responsibility in trying to anticipate the consequences of their work is a related, but different question, with different trade-offs involved -- because engineers are concerned more with building things and delivering practical benefits, while research (and academic scholarship) is typically more interested in a longer term view of things. Both are important but distinct questions. Please don't conflate the two points in this discussion.


I’m not sure enough people actually read the article, though. Here’s what the person proposing this change said:

> In my first ever AI class, we learned about how a system had been developed to automate something that had previously been a person’s job. Everyone said, “Isn’t this amazing?” — but I was concerned about who did these jobs. It stuck with me that no one else’s ears perked up at the significant downside to this very cool invention.

Statements like that are frightening to me. It’s incredibly hard to define “negative“ in a reasonable way, and I guarantee you that someone else is going to define it in a way you don’t like.

What’s the negative impact of the recent discovery of a non-quantum recommendation algorithm on the field of quantum computing? Should the young researcher have spent time considering and detailing that?


> What’s the negative impact of the recent discovery of a non-quantum recommendation algorithm on the field of quantum computing? Should the young researcher have spent time considering and detailing that?

If we consider the goal of research to be improving society, then yes, I think it's reasonable to expect a researcher to spend some time considering whether what they're doing is going to actually improve society. In biology, most funding agencies require you to take a brief (~8-10 hours of class time) course on the responsible conduct of research, even if you aren't doing any human subjects research.

In the specific case you gave of the non-quantum recommendation algorithm, the negative impacts should be more or less the same as the impacts of the quantum algorithm, but with less of a barrier to entry. And even the person proposing this acknowledges: "We all agree it’s hard and that we’re going to miss tonnes of them, but even if we catch just 1% or 5%, it’s worth it."


I think in the case mentioned, the implication is that advertisers would be better equipped to target us with ads – which is a fair point.

But why must we burden the theorists with ethics, and not the programmers or companies who bring that theory into the real world?


Seems like Nature is encouraging modern-day Ludditism. We have a historical perspective to reject this out of hand: workers are now much better off than in the pre-industrial period. And yes, I understand that the past does not predict the future, but that cuts both ways. As a rule doom and gloom predictions do not age well.


Why is that statement so frightening to you?


Because it highlights exactly how broad the definition of “negative” can be. I grew up in a family of blue-collar workers. My uncle worked at a steel mill that shut down. Perhaps with some automation it could have at least stayed open. Regardless, he eventually found a new, rewarding job.

My dad is out of a job now, too, but that’s because he got hurt at work, multiple times. He has had back issues my entire life because he got hurt at work.

The person proposing a consideration of negative consequences was “inspired” by jobs being automated away. Whether a researcher has considered the negative consequences enough will be determined by peer reviewers who have their own political leanings and potential agendas. In the end, I would expect the process to become completely subverted, without accomplishing much along the way.


It seems quite strong to expect "researchers disclose any possible negative societal consequences of their work in papers".

This isn't just a matter of knowledge and familiarity. It requires some intensely personal moral and ethical judgments. One person's moral evil (self-driving trucks putting truckers out of work) might be another's moral obligation (getting rid of the life-shortening occupational hazards of trucking).

The verbiage around fully disclosing the possible outcomes adds an extra wrinkle. Reasonable results require social and technological forecasting and a weighing of what researchers think is likely or reasonable. It's possible that the researchers we're discussing might not have extensive training in forecasting large-scale social behavior.

I would submit that it's worth this effort to catch some small percent of potential negative impacts only if we can do so without excessive costs from false positives. A scenario in which an excess of false positives overwhelms the number of true positives at great cost is a very possible outcome and one that merits more than casual dismissal. If we can't predict outcomes reliably and can't agree on how to weigh them, then some might consider it wise to not predicate significant and consequential decisions based on a combination thereof.

You're absolutely right. Researchers should be taking a look at the long-term consequences of their work. It's just possible that researchers should not assume that they're equipped to make long-term predictions or shape the efforts of their peers around said predictions.

Hecht's heart - and yours - are absolutely in the right place. We can, and should, think carefully and deeply about the consequences of all of our choices with compassion and humility.


> This isn't just a matter of knowledge and familiarity. It requires some intensely personal moral and ethical judgments. One person's moral evil (self-driving trucks putting truckers out of work) might be another's moral obligation (getting rid of the life-shortening occupational hazards of trucking).

Absolutely. We shouldn't expect the authors of one paper to decide on the utility of various consequences and alternatives. Also, those consequences don't exist in a vacuum, and depend on other assumptions. Eg: Automating jobs is possibly an ethical positive provided people don't need jobs to survive. And this provides an opportunity to kick off other lines of research and inquiry.

> ...excessive costs from false positives...

Could someone give concrete examples of the possible costs of "false positives"? In the picture I have in mind, papers are not accepted/rejected based on whether someone likes the consequences or not. (as stated in the article)

> It's just possible that researchers should not assume that they're equipped to make long-term predictions or shape the efforts of their peers around said predictions.

I think excuses based on insufficient expertise are a cop-out. There is no social-science specialization with predictive models for such things. The authors are expected to have the most insight into the specific aspects they are writing on, and generally educated well enough to be a functioning citizen of modern society. If they can vote, they can damn well put some thought into possible consequences.

--

The way I see it, there are two goals underlying this suggestion:

1. Get a better understanding of the possible consequences of different lines of advancement, and, as a community, encourage the "positive" ones while discouraging the "negative" ones. (for some value of positive/negative, which might well be emergent only after much back-and-forth)

2. Get each and every researcher to actively think about the ethics of their research and the problems they choose to solve.

Most commenters are focusing on only the first one; I think the second is equally important. If the focus was mainly on the first problem, we might choose to have editors writing editorials every few months on recent advancements in some topic. Or a conference panel discussing the same. Those will help towards the first goal, but only partially towards the second goal, since most researchers could "outsource" ethics to others.


> Eg: Automating jobs is possibly an ethical positive provided people don't need jobs to survive.

Or maybe automating certain jobs is an ethical positive because it means fewer people die in coal pits. You're very right that this is an excellent opportunity to kick off other lines of research and inquiry. It's possible that what is essentially a journal of applied mathematics might not be an ideal venue for such, given that journals of ethics and economics already exist.

> Could someone give concrete examples of the possible costs of "false positives"? In the picture I have in mind, papers are not accepted/rejected based on whether someone likes the consequences or not. (as stated in the article)

What would be the point of a negative-social-consequences disclosure requirement if there were no consequences whatsoever for leaving one off or being glaringly silly in the disclosure? The point of this proposed process is that it is grounds to reject a paper and require further revision.

To put it another way: the process proposed will either have false positives or it will do nothing. The extent to which it has false positives that exact a cost will be determined by the extent to which the process works as intended to discourage research deemed to have negative social consequences.

Rejecting a paper might end a grad student's career. It might prevent disclosure of a significant finding in time to have a major impact on something important. We have no way of anticipating the consequences of this. Perhaps this proposal should come with a discussion of the potential negative social consequences of it, given that some can already be reasonably foreseen.

Given how abstract that is, I shall offer a more concrete example. My past involves research into how to make circuits more efficient and more easily produced reliably and at scale. At no point in this process did anyone stop and think about whether this might have applications in computers used as part of weapons systems. Which, looking back, is obviously one application of computers.

> The authors are expected to have the most insight into the specific aspects they are writing on, and generally educated well enough to be a functioning citizen of modern society.

You're absolutely right. The authors can be expected to have deep and significant insight into the technical or mathematical aspects of their work. It's possible that this might not be the same as having the most insight into economic or social consequences of their work.

> 1. Get a better understanding of the possible consequences of different lines of advancement, and, as a community, encourage the "positive" ones while discouraging the "negative" ones. (for some value of positive/negative, which might well be emergent only after much back-and-forth)

This is a laudable goal! And it's perhaps worth considering if there might be unintentional and perhaps "negative" side effects to such a process.

> 2. Get each and every researcher to actively think about the ethics of their research and the problems they choose to solve.

If this is the goal, then perhaps the requirement should be for every paper to be accompanied with an ethical statement which is in no way part of the review process. That would satisfy your needs without risking significant unintentional side-effects, would it not?


Since the review process is iterative, the reviewer will point out what they consider to be a shortcoming, and the author will fix that part. If they disagree, and the author is keen on not mentioning those concerns even as worth considering, but the paper is otherwise good, we could evolve a system whereby the reviewer comment is published along with the paper.

> If this is the goal, then perhaps the requirement should be for every paper to be accompanied with an ethical statement which is in no way part of the review process. That would satisfy your needs without risking significant unintentional side-effects, would it not?

How do we stop this ethics statement from being vacuous, unless it is reviewed? It might quickly devolve along the lines of TOS/EULA (legalese to include all edge cases) or reduce to something trivial like: No fingers were harmed while typing up this paper.


> Since the review process is iterative, the reviewer will point out what they consider to be a shortcoming, and the author will fix that part.

That takes a non-zero amount of time, and thus funding. So now we've got a system where disagreeing about someone's forecasting of potential social consequences or their decisions around ethics imposes costs on mathematics.

Even processes intentionally and explicitly designed to be iterative have an unfortunate tendency to stop iterating as conditions change. This adds one more reason for papers to not reach publication.

> If they disagree, and the author is keen on not mentioning those concerns even as worth considering, but the paper is otherwise good, we could evolve a system whereby the reviewer comment is published along with the paper.

That sounds like a good idea. In fact, why not just skip the review portion of it and publish reviewers' comments on a paper's ethics alongside it every single time? It would seem to get to the same point more quickly!

> How do we stop this ethics statement from being vacuous, unless it is reviewed?

Here we have the crux of it: the whole point of this proposal is for there to be real consequences attached. Which brings us back to my point about false positives and side effects.

I suspect that in most cases, the statements would correctly be vacuous. Many - probably most - papers in computer science have minimal to no obvious or reasonably foreseeable social implications.


> EDIT: Most papers/presentations already talk about the possible positive impacts -- nobody holds back on that for lack of expertise.

That's not really accurate. Sure, people talk about positive impacts - but in a technical CS paper, they talk about positive technical aspects, e.g. "this new method will allow you to train ML models faster". They don't usually talk about societal impacts, which is what the article is about.


> Firstly, this is for researchers/academics who are publishing in journals, disseminating ideas for new technology.

The implicit assumption is that engineers will use it to make something bad. The scientists must anticipate what engineers will do and document the potential negative uses. At best it's useless boilerplate; at worst it's advertising all the negative uses. Science and Engineering are inextricably linked when it comes to impacts on society.

The next step, of course, is to stop research if it's determined someone can do something bad enough with it. Why else would you go through the effort of documenting it?


I get that they're trying to encourage ethical behavior, but this proposal seems incredibly arrogant. It implicitly states that computer programmers are, as a field, competent to predict how a technology will impact society. Whether or not ANY field is competent to do that is questionable, but whether or not computer programmers are able to do that is not questionable; it's clearly incorrect. We don't have the background, training, skillset, or in some cases even personality to be making insightful predictions about how society will respond to and be impacted by a new technology. I'm not saying the person advocating it is a bad person, but this particular proposal is arrogant (and founded on an incorrect understanding of what kind of thing programmers are good at).


They're not talking about computer programmers though (I agree they don't have the ability). The more interesting question is whether Computer Science (the academic discipline) should focus more on the societal impact of computing instead of solely on the technical implementation of computing (i.e. applied mathematics).


Do any of us expect that the computer scientists who invented and standardized TCP/IP could have reasonably anticipated Facebook and the fake news problem it would have?

I would posit that there was no good way to reasonably forecast it at that time. This isn't an irrelevant point, though it will seem that way at first blush. The point is that it's difficult to forecast significantly in advance the consequences of technological advancement. That the people asked to do the forecasting are not trained in analyzing social trends does not make this task easier.


Facebook per se? No. But I'm a math grad. Yet I ALSO have this ability to read English (and Spanish and some Portuguese, but I'm not here to brag.) Wow. Look at that, I've mastered a couple different sets of syntax that enable semantic understanding.

And there are books written in English that talk to the social history of the species. History, physics, etc, being subsets of English syntax (or other "human languages") tailored to describe specific semantics.

A casual look at what happened over the years as various social bubbles suddenly slammed into each other: Columbus and the Americas, the repeated conquests of Europe and lands further east.

It doesn't seem a stretch to think "an information network could be used for malicious shit." And it has been for years, depending on who you talk to (content piracy, trafficking, AI-guided weapons, etc.), long before Facebook. So there's a plain history in contemporary times too. No need to dig that far back.

And it's not advocating for binning the abstract research, or for avoiding making these things concrete. It advocates for fostering a conversation about the potential pitfalls.

Like programmers do before considering a change.

Deep knowledge of the specifics isn't necessary, IMO. We code-heads are already aware that "a change in the system may have bad side effects; let's avoid that as much as possible." The emotional consideration is already there.

This thread is devolving into childish attitudes about having to be part of society. Division of labor, taken to its extreme, leads people to simply disregard that they are part of a bigger picture (society). What did Adam Smith have to say about such things, all the way back in the 1700s? And he had nowhere near modern formal training. Seems rather spot on here.


You're absolutely right. There's a long, ugly, and documented history of what can happen when isolated societies slam into each other. The answer is almost always violence, exploitation, and oppression.

Is it possible that this might not have been the planned result of TCP/IP, however? At the time there was just one social bubble and the point was to enable two computers already in it to connect in a more standardized manner. The notion of using the system for malice largely didn't occur, because at that point there was no reasonable way to gain from doing so. People weren't trying to engineer globe-spanning society-merging information networks.

But never mind that. Reasonable people can disagree on questions like the accuracy of our ability to correctly predict the future from 45 years away and counteract those potential distant negative consequences.

Moving on to what I consider the more interesting point: the process proposed. It's pretty clear that this isn't about fostering a conversation. You do not need enforcement mechanisms to foster a conversation. Enforcement abilities - like rejecting papers, which always comes with the risk of killing a paper forever - are about coercion. Maybe there's another way, like including ethical statements outside review processes, with papers that could both foster conversation and not rely on coercion?

You're right again! Fostering conversation is immensely valuable. We are all part of a vulnerable and critically important society. It is our moral obligation to think about that as we look at our work and discuss it with our peers while also being humble and aware of our own limitations. We can, should, and must do this absolutely critical work. The future depends upon it. We are called to the work.


> whether Computer Science (the academic discipline) should focus more on the societal impact of computing.

The people behind http://interactions.acm.org/ would say YES. I bet most of academia would say yes.


The general public too. Ask any non-nerd if they are aware that, by accessing their Gmail account via a third-party service, that service can read their emails. I’m sure the majority will be freaked out by even the notion that that would be possible. Horrified, even.

Then ask them if they understand just how much data Google & Facebook have on their browsing history via cookies, third-party trackers, ad boxes & share buttons.

Then ask them if they know that, if they did an ancestry check, their DNA data may soon be shared with their insurance provider?

Then ask them if they know that their credit history was probably leaked to the public via the Equifax hack?

Then ask them if they know that all that data is legally allowed to be used forever with minimal oversight other than what’s mandated by law?

All of the above is obvious to nerds like us but the public at large haven’t got a clue about any of this.


> The more interesting question is whether Computer Science (the academic discipline) should focus more on the societal impact of computing instead of solely on the technical implementation of computing

This is a very long and especially old discussion; I'd say it goes back 150-200 years, and it first started as a reaction to the optimistic views of positivists like Auguste Comte. AFAIK nothing conclusive has come out of it yet, even though the discussion has touched on some heated topics over all this time: should the physicists who invented the atom bomb be held responsible for Hiroshima? Should the chemists be held responsible for the mustard gas used in WW1? Should the bureaucrats, with their focus on modernizing the state and having everything well labeled and well counted, be held responsible for the fact that this made the Nazis' job a lot easier when they needed to find the Jews among the general population? Etc., etc.


It’s not about holding particular people responsible in hindsight.

It’s about having a public debate to decide what laws are needed to robustly protect the public before the technology is even misused.

So in your examples it’d be:

- when the physicists inform the public, MAD is quickly determined to be the outcome of that technology, and laws are made to prevent nukes from being built in the first place (and, optimistically, to prevent wars from ever happening again, given the possibility that an aggrieved side might develop nukes). Or, because the public debate did not happen until after nukes were developed and deployed, to allow them to be developed and defer wars between nuke-enabled countries to hyper-diplomacy while making every effort to restrict development & deployment, as we currently have.

- when the chemists alert the public that chemical weapons are possibly far more effective than anyone ever imagined, then laws are made to ban their deployment. As is currently the case after informed public debate. But instead of informed public debate before the fact, millions were gassed during WW1.

- when the bureaucrats realize they can identify large swathes of the population without restriction, laws can be brought in to restrict data gathering to protect the public from such actions.

In all of your examples, an informed public debate could have urged politicians to create laws to robustly protect the public from these technologies.

The key thing to realize is that the public debate will happen. It’s inevitable. The key role that the creators of these technologies can play is to raise these ethical concerns before harm is risked. Even though the public debate did not happen until after all of the above technologies were misused, it resulted in swiftly created laws to robustly protect the public from those technologies. Delaying the debate does nothing other than risk disaster.

I know that last line sounds like hyperbolic exaggeration, but the debate is an inevitability, and ignoring it is unethical.


It's also worth considering that the public might decide that all this mucking around with injecting people with virii sounds like trying to kill us all. So vaccine use gets banned, and lots of people die from preventable diseases.

Or maybe a town refuses to fluoridate their water after a series of propaganda campaigns convinces people there that it's toxic and will ruin the purity of their water system.

I understand why you cast it as you do. History is chock full of preventable abuses! Yet, it's worth considering that such things can backfire as well. Public debate is not a panacea, and policy is often rushed and poorly formed to satisfy atavistic fears. A public debate can easily urge politicians to robustly protect the public from imagined threats while enabling real ones.


Only if the whole population succumbs to ignorance. Not saying that doesn’t happen, especially in light of stuff like net neutrality and the NSA et al’s data gathering.

But on the whole, the general public will drown out cranks who shout about imagined threats like fluoride or vaccines. The science on both is clear.

Likewise, I’m optimistic that the negative realities of net neutrality stifling startups and the mass data-gathering will become so apparent to future generations of lawmakers & voters that they’ll be reversed in due course.


It doesn't take the whole population succumbing to ignorance. It just takes a bunch of voters who are uncertain and scared. This happened in Portland, Oregon a few years back when voters were terrified into opposing fluoridated drinking water.

The science may be clear, but it may be unwise to be too certain that future generations will agree with what you and I believe in today.


> They're not talking about computer programmers though (I agree they don't have the ability).

This pigeon-holing was way beyond tiresome, not to mention grossly inaccurate, 20 years ago: please STOP IT. Computer programmers aren't in general some special class of person incapable of learning new skills and behaviours. Quite the opposite for the most part.


I'm not sure I understand why you call this pigeon-holing. The GP explained that the professional field of programming has very little overlap with the professional field of social dynamics, and I was simply agreeing with that observation. "Most programmers don't know how society will evolve" appears to me a similar observation as "most pilots don't know how to build a plane". I don't really see why that's an objectionable statement.


This is, at its heart, a question of ethics in engineering disciplines.

Software engineering is a combination of applied mathematics, data science, and significant engineering practices (do X with Y constraints).

In Engineering proper, we have ethics classes. They cover how things can fail and cause damage to property and structures, and/or injury up to and including death.

We have all read about the Therac-25. That was a software and hardware design choice that led to patient deaths.

Computer science seems to skirt the ethics discussions just by being too new of a field. We (royal) aren't looking at the failure modes of having social media. Nor are we considering the ramifications of the code we write.

Intelligence is the ability to write the code. Wisdom is the ability to understand its ramifications.


Where does this wisdom come from? One's own experience, or learning from the experiences of others. How does one gain experience in, say, "having social media" if we've never had social media until the last few years? Social media hasn't been around long enough for even a single complete human generation to have grown up with it.

When MySpace, Facebook, Twitter, et al. were created, the "consequences" were providing an outlet for expression or keeping connected with friends and family. Noble causes. How do we get from there to "...and guess what, it'll be bad for individuals or society in $THIS_SPECIFIC_WAY"? I don't think it's possible.

Ethics evolve with society. Society will develop the ethics necessary to teach computer scientists and programmers. Such things are already underway[1].

Further, I'm seeing too little nuance in the comments on this story. Someone puts a Raspberry Pi in a situation to be responsible for human lives and it fails - is Linus now culpable?

1 - https://www.computer.org/web/education/code-of-ethics


An easy example from recent software development history: the developers who built the emissions-cheating software on Volkswagen cars a few years ago. That led to nearly a decade of overly polluting cars being mis-sold to the public. Those elevated emissions are directly responsible for deaths among the populations in which those cars were sold.


It doesn’t take much study of history to see how cults form or the power of publishing anything you want as fact. Social media put the tools to do that in the hands of everyone.

I won’t make a moral judgement on that decision but it is dangerously naive to think all technology has only positive externalities. The criticism of the current state of software engineering is that we don’t consider those externalities.

“It’s new and we didn’t know what would happen” is a really shitty excuse to hear from an aerospace engineer after an airline crash.


A crash is the sort of outcome an aerospace engineering company should be able to predict. Celebrities burning huge quantities of aviation fuel on their way to and from conferences to pontificate about greenhouse gases isn't.

It's not clear to me that anyone can predict the consequences of technology even a few years into the future (much less the seventh generation that some folks talk about and that would, if adopted, bring progress to a halt), nor whether any elite should be trusted to decide whether to permit the adoption of some technology. (If you suggest the government, consider Trump and net neutrality.)


>It’s not clear to me that anyone can predict the consequences of technology

That’s the whole point of the article: no one person can ever figure that stuff out. That’s why the public needs to be informed. No matter how educated (malicious or not) an “elite” is, they’ll never be able to predict as much as the general public will.

Yet a wider public debate can.

Your example about greenhouse gases is a good example of this. Forget about the celebrities for a sec and look at the big picture:

The guys in the hydrocarbon industry were experts at their jobs: developing oil, coal & gas and selling those products. Likewise the car industry, experts at building and selling cars.

Their expertise is in building their particular products and selling them. They could not have been expected to understand the negative environmental effects. And moreover, it’s in their interest to ignore the negatives associated with their products because it’s much more profitable for them to produce cheap products that require little R&D.

It took insights from the general public outside those industries to realize how those products affect the population as a whole:

- Burning hydrocarbons releases far more CO2 than the planet can scrub via the carbon cycle, which has led to an enhanced greenhouse effect. Not only that, but certain types of coal produce the smog that once blighted our cities and literally killed people with weak respiratory systems. Not only all of that, but the lead that was added to gas to prevent engine knocking kills people too.

- Emissions from cars not only add CO2 to the greenhouse effect; other car emissions such as carbon nanoparticles, nitrogen oxides and sulfur compounds also result in measurable deaths among the populations living around those cars.

As a result, laws are passed to make cars emit less and to reduce societal dependence on hydrocarbons. The public debate resulted in the population realizing that there are health hazards both to individuals in the short term and to the global climate in the long term, and resulted in laws being passed to reduce these health hazards as far as is possible right now.

Without informed public debate we’d never have realized that these negative effects occur, and we’d never have saved as many lives as have been saved now that our cities are not drenched in lead and nitrogen-oxide-saturated smog.


No, but computer programmers aren't researchers publishing papers. I think the article is talking about research papers like this:

http://dblp.org/db/conf/focs/focs2017.html

So, what might the negative consequences be for "An Input Sensitive Online Algorithm for the Metric Bipartite Matching Problem"?


It's right there in the answer to the very first interview question.

> The idea is not to try to predict the future, but, on the basis of the literature, to identify the expected side effects or unintended uses of this technology.

Besides being (ironically) arrogant, your dismissal sounds to me like you want to disavow any responsibility computer programmers have for what the technology they develop is used for.

> founded on an incorrect understanding of what kind of thing programmers are good at

In my personal opinion, if you're not always thinking of the applications and effects on society of your work, you're not a very fully-developed human being and certainly not the best programmer you could be, to say nothing of being a good citizen.


Ah yes, we've invented this iPhone thingie; we predict it will cause people to get addicted to cat memes and send dick pics, and the Russians to meddle with our election.


Russians have been meddling with our elections for a long, long time.


It’s not that hard.

Take an example: your team has developed technology that can identify, with five nines accuracy, the yearly income of a user. That’s innovative, for sure. And nobody is taking the achievement away from your team.

But you don’t need to be Socrates to realize that what you’ve created could be used in horrifying ways. The article is asking creators to take a step back, ignore the excitement of creation for one moment, and consider if what they’re doing should be done. And if the answer is Yes, then decide if a public debate is required to protect the public from that new technology’s misuse.

Now the first reaction from many will be to scoff “fuck that, I’m just building an app here, I’ve got an investment to recoup and money to make”. That’s valid, but it’s an unethical way of looking at the act of creation. Feel free to hide behind the “I’m just doing my job” excuse, but we all know where that ended up.

That is the crux here: politicians can only make laws to protect the public at large from misuse of your newly created technology if they know about it. And the public at large can only demand robust protection (in the form of laws made by their politicians) from misuse of your technology if the public at large knows about it. Without a public debate, new technology could be misunderstood by politicians such that even if they are aware of the technology and even if they legislate, they may not make laws that robustly protect the public.

Thus, from an ethical standpoint, a public debate is needed so that robust laws are demanded and provided in order to protect the public at large from new technologies.


The question is, who becomes the final arbiter? And if that decision is "we should hide this knowledge", how is that enacted? What are the consequences for the researchers? Can you name any specific examples of computer science research that you think clearly should be censored (in your example, should the software or algorithm to do accurate prediction become "illegal")? And in those cases, what should be the penalty for publishing the work?

I think if we start to delve into the actual implementation of this, and look at real examples, it will be clear that this idea of "research that can be used for bad should be not published" leads to a bad type of society.


Exactly, that is the exact question. Who should be the arbiter? The public. Everyone.

The physicists & engineers didn’t identify the MAD inevitability from developing nukes. The public did.

I’m not demonizing the physicists or engineers here either; how were they to know what the outcome of their developments could be?! But the public at large contains people who have ways of looking at new technologies other than “this is innovative!” or “this will end the war!” or “this will be profitable!”. It’s these outsider viewpoints that are needed for the public to decide if laws are required in order to robustly protect the public from that new technology.

In those specific scenarios, further debate is moot because we have the nukes & chemical weapons examples from recent history; the ethics are clear. I’m speaking more about developers of new technologies alerting the public at large to the potential risks involved with their technology, risks that would make robust laws protecting the public from misuse of that technology prudent.

But, in saying that, there are examples of programmers who could have alerted the public in the public interest but kept their mouths shut instead, such as those who wrote the emissions-cheat software for Volkswagen. Worse, they knew that their actions would lead to deaths among the populations in which those cars were sold, because emission-related deaths have been well understood for decades now. That is a crime that was perpetrated at the front lines of software development. Those guys knew what they were doing and their “secrecy” is nothing more than a bloody conspiracy to murder. They didn’t give a fuck how many died from the emissions released by their cars during the decade they were on sale, because “I’m just doing my job building the software I was asked to build” is an adequate excuse in their book.


Given how thoroughly the physicists who wrote the https://en.wikipedia.org/wiki/Frisch–Peierls_memorandum anticipated the use of nuclear weapons as deterrence, I do not think the supposition that physicists were unable to identify the MAD potential is accurate.


Though, to be fair, even if they aren't good at predicting societal impact, simply asking the question may be enough to identify the low-hanging fruit, even without expertise in sociology.


Further, as we get practiced at asking the question, maybe we'll get better at it.


I think it is even worse in its arrogance, though the arrogance is not the programmer's but that of the would-be "ethical reformers": it assumes the ethics of /theory/ can be determined beforehand, that theory can be halted, and that following these judgements would lead to a positive outcome by holding the whole world back, even though others may stumble upon the same train of thought later. The fact that there is relatively little friction between theory and practice does not change that.

Under that framework the USSR's suppression of genetics in favor of Lysenkoist bullshit would be justified, because an incomplete understanding of genetics could lead to eugenic travesties. It is like King Cnut trying to stop the tides, only totally unironically and unsarcastically.


    > It implicitly states that computer programmers are, as a field, competent to predict how a technology will impact society.
Not always, though; there are plenty of negative examples already. Just don't let society fall for the same mistakes again.


I don't think you would have to believe that we're competent at predicting our work's impact on society to support this. The main benefit would probably just be getting people to consider the potential negative impacts of their work.


I mostly agree with this; I would say it's difficult for someone of any profession to predict how a technology will be used.


That’s not what’s being asked. No one person, or team of people, can determine that.

What the article is proposing is that creators of new technologies inform the public so that the general public can decide if new laws are required to prevent that new technology from being misused against the public at large.

There are precedents here, and I’ve written a ton of comments about them if you want to read a bit about why such a proposition is the only ethical option.


If you go a bit deeper into the paper, it suggests engaging social scientists as well as computer science folks to make those predictions.


Isn't society supposed to democratically control the impacts of various things upon itself?


Not when the market has no moral compass to guide it. Democracy is controlled by the market, and when you teach a computer to pick the cars out of a picture so you can "prove you are not a bot" (watch Real Genius), you do have a moral obligation regarding what you take part in creating. If you say "my piece is small and I can't stop them from making a facial recognition system/plate reader/password cracker", then you are putting your financial gain above what your work does to others. You are part of the problem.


> Democracy is controlled by the market

This is the problem, because you know what else is controlled by the market? Your employment. I fully encourage going on strike if asked to make surveillance tech, but you will be fired for it. Then some other firm will be hired, and, oh well.

If "the market" controls democracy, to the extent that the "democratic" government demands surveillance tech because someone on the market wants to sell it, that's the problem to be solved, not a shortage of research scientists martyring their careers for no effective gain against the larger social pathologies.


YES! Totally agree but they can't continue to give the market the weapons that are being used to maintain that repression.


If by "democratically" you mean through the market, then yes, it's how this works.

EDIT: what I mean is, roughly, markets are how we usually screw ourselves up in the first place.


No, they mean through legislation.


[flagged]



None of those actions were the result of an informed public debate.

In fact, they’re all actions that show that an informed public debate regarding new technologies is needed to protect the public from not only unscrupulous businesses or external threats but to protect the public from their own authorities.

Without public debate and a public that demands robust protection in the form of laws, misused technologies represent a threat to everyone until that debate is had.


Beautiful timing.


Even with perfect prediction, who is to decide, for everyone, what is good and what is bad?


Excellent point.

Someone else brought up the culpability of fission researchers for Hiroshima. That's one example of a major technical endeavor where even with historical hindsight people won't all agree on its ethics.


Regardless of the ethics of the individual fission researchers, they were experts in their narrow field. They could never be held responsible for not realizing that MAD was the only outcome their work could result in. The MAD determination was made by the rest of the general public once the full facts of what those initial fission researchers had developed came to public knowledge.

The article is proposing that that public debate happen during or before development so that laws can be made to robustly protect the public from a new technology in the event that it’s misused.


Yeah, I was just trying to add a reason that the OP's proposal wasn't workable.


Perfect is the enemy of good.


Get off your high horse man. Disclosing a worry or concern is not arrogance even if you do not wield the expertise to evaluate said concern. It is another input that may or may not be beneficial to humanity to assimilate, but let the reader be the judge.

It is impossible to assess the risk of unknown unknowns, so rather than assailing someone with adjectives and prosecuting them for the crimes of “arrogance” and “hubris,” just judge less and listen. If they’re wrong, move on.

Look in the mirror before you call someone else arrogant.


> If they’re wrong, move on

Why? Aren't we allowed to judge the wrong? There's nothing wrong with calling someone arrogant if he's arrogant.


Except complete hypocrisy I suppose, which it falls upon the judge to assert as vice or virtue.

Better than invective is a genuine and sincere mutual examination of the facts — actual content rather than mindless reputational slander of an entire field of study.

http://www.paulgraham.com/disagree.html


Please don't put personal barbs in your HN comments, regardless of how arrogant some other comment seems.

Your comment would be much better without its first and last sentences. The inverse of a shit sandwich!


I don't think this matters. To my knowledge, much of what would cause the negative social consequences isn't necessarily coming out of the academy, or up for peer review. Instead, it's coming from companies that handle massive amounts of data and aren't submitting their technology (in general) as a paper to a peer-review board, but as a product to market. While researchers for these companies can and do publish, the end goal is profit, not publication.


The more I think about it, the more I feel "Information Asymmetry" is the root cause of such problems.

https://en.wikipedia.org/wiki/Information_asymmetry


We have this whole system of IRBs governing how we conduct research; there's a big reason why.


There’s precedent here. Einstein notified the US President when it became clear how dangerous the then-recent developments in nuclear physics were. That allowed politicians, who would have had very little knowledge of physics, to do their job effectively according to the “new landscape” that such developments presented.

The mistake he made was that he only alerted the authorities, which led to atom bombs being developed in secret without any feedback from the public as to whether this was a direction that they supported.

What we need in 2018 and beyond is for current and future developers of new technologies to alert both authorities and the public so that politicians can legislate for that technology’s use with public input as to what limits are deemed appropriate by the public at large.

The deep fakes example highlighted elsewhere in the thread is a perfect example of this. How can politicians legislate for this technology when they don’t even know it exists? How can the public indicate to those same politicians that they feel that deep fake technology used to create revenge porn is something that the public wishes to be made illegal?

Laws, and society as a whole, are feedback mechanisms that rely on information. Those with that information have a moral obligation to alert the public to consequences that will affect them all.


> The mistake he made was that he only alerted the authorities, which led to atom bombs being developed in secret without any feedback from the public as to whether this was a direction that they supported.

The risk there is that if you dithered the enemy could get it first.

If Pandora's box exists, there is only so much you can do to stop someone from opening it.


His disclosures were before the war. If the public had a chance to chip in, along with a public debate that would inevitably lead to our current MAD theory, then it’s reasonable to imagine that the current “let’s not go to war with each other anymore because MAD” might, even before WW2, have been “let’s not go to war with each other anymore, or develop these expensive yet terrifying weapons, because MAD”.

The key point is that without the public’s input, politicians, acting rationally, decided to direct their engineers & scientists (also acting rationally) to develop these weapons in secret, with everyone involved knowing that nukes were “extremely powerful” and “terrifying” and “war-ending” but not fully appreciating the MAD aspect that kicks in when your enemies play development catch-up.

Tons of technologies today, while not as much of an instant threat to our civilization as nukes, are loaded with ethical concerns that current politicians and the public at large are just not aware of. Just reference the deep fakes saga for one recent example. There are currently no laws outlawing the use of deep fake techniques to produce revenge porn, but there’s every reason to believe that these techniques will be outlawed for such use in forthcoming legislation now that the public and politicians are aware of the risk. Extrapolate that to any number of innovations. With informed debate we can keep laws ahead of the game and prevent new techniques and technologies from being legally used for unethical purposes.


Surely submitting technical papers for critical analysis before publication to ensure that problematic views are not given a platform or legitimized by institutions, and so that affected and marginalized communities are given more prominent equal representation, will help correct systemic injustices.

But sarcasm aside, a variation of the exact same quip could be made for privacy and other assessments. I would argue that it is on other disciplines to keep up with what's going on in comp.sci to determine the impact of new findings on their fields, and for all STEM fields to be very careful not to invent administrative policing roles that can grant unmerited standing to arbitrary political interests.

If you are in privacy, you need to be on this. If you are in economics, political science, or equity fields, you would also need to be on this. Technology necessarily makes everyone more interdisciplinary, and instead of filtering it out, those fields should be researching new developments that affect them.


I like the intent behind this idea and think that it is important for ethics to start becoming part of Computer Science. I'd be curious if someone with more knowledge about medicine could explain how those publications handle these dilemmas. Especially with regards to CRISPR Cas9 as that is probably the most famous recent discovery that needs some serious ethical considerations.

My fear is that if Computer Science doesn't start acknowledging the ethical consequences of the work being done, it will lead to a sharp increase in regulations. This fear is primarily about self-driving cars, some of which seem to have been rushed into production and have led to some serious consequences.


CRISPR doesn't really mean the ethical considerations have changed. It's just a tool that makes genomic changes easier (and the jury is still very much out on whether it can do so safely). Other tools already existed to do this, just more expensive and more challenging to work with.


I won't claim to speak for others, but my formal education in Computer Science included a course on ethics and professionalism. It presented several ethical frameworks.

None of those ethical frameworks would lead me to conclude that, say, self-driving vehicles are net-negative for society.

My personal experience conducting research in computer science is that at least some of it is too abstract to have clear social consequences. What are the social consequences of making it marginally easier and faster to produce chip designs that are easy to manufacture?


CS should take a few hints from more mature fields (I don't mean that pejoratively; it's just that CS is a very recent field). In particular, let's look at what biology did once it was technically possible to splice genes. A lot of biologists were terrified by what they believed the consequences of gene splicing would be (anything from genetic social dystopias to superdiseases).

They gathered a bunch of data and had a meeting in Asilomar (a pretty conference center in the middle of nowhere on the CA coast). In addition to the scientists who actually had the technical ability to splice genes (~100 humans at that point had the requisite skills), some lawyers and philosophers also attended. They argued a bunch. And at the end of the day, they agreed on a voluntary moratorium on splicing, even though the argument for immediate, direct harm had not yet been made (precautionary principle).

Very reasonable predictions were made. Some of the more irresponsible claims were tamped down as being overly irrational. Some basic rules were applied on how to minimize collateral damage. Some strong rules were applied on disease-causing experiments.

It seems, looking back with the luxury of historical distance, that they chose wisely. It was probably because the level-headed people who focused on clear and present dangers, not absurd hypotheticals, managed to win the day.

Today, the logical conclusion of those decisions is just starting to be felt; we now have technology that permits us to make germ-line modifications (those are permanent, inherited changes), and we're starting to do actual trials with newly born children (this is, IMO, one of the most extraordinary moral and philosophical developments of my lifetime).

It's unclear to me whether CS has the same level-headed maturity that led to this. Nor do I think people in CS can really predict the indirect outcomes of their technology. Instead, I think CS people have an obligation to push the technology as far as it can go within a community-determined set of boundaries, and report the direct implications thereof to the general public, who will then (indirectly, through democratic mechanisms) create laws that restrict certain applications.


Most biologists are either in academia or government-employed and do not need to even think about money or making a profit.

People working in CS are mostly in industry and do not have 10+ year timeframes to leisurely discuss ideas.

And what CS inventions are as impactful as atomic bombs (as someone mentioned above?) I cannot think of any CS inventions that warrant such extended ethical discussion.

The only exception would be computer scientists working on weapons in the defense industry. They should absolutely be held back by ethical concerns like this.


Actually, the Asilomar conference touched quite heavily on the rapidly burgeoning biotech industry.

I have a fair amount of experience here, being a biologist who did gene cloning and now works on large-scale machine learning models.

TBH I can see a wide range of surveillance-related technologies enabled by CS that are as impactful (but not as explosively so) as atomic weapons (I think you actually mean ICBMs, since atomic weapons aren't an existential threat, while ICBMs are).


I suspect you may mean that ICBMs with atomic weapons are existential threats. Most people would not regard an ICBM equipped with a conventional explosive warhead as being capable of annihilating humanity.

If anything, this should hint at the difficulty of forecasting. The full impacts of some technologies cannot be understood without also knowing what they synergize with.


Yes, I mean ICBMs with atomic warheads.

That said, ICBMs with (conventional) thermobaric warheads could do a lot of damage; you could effectively take out a medium-sized country's industrial sector in an hour.


The difference between the fields of biology and computer science is that CS is way more abstract. It would be like asking, “what are the ethical implications of math?”

To this end, we need to distinguish computer science (theory) from technology (application); it is the latter that has direct effects on society.


Computer science is a much bigger field than it once was. There are many areas where compsci is useful from an interdisciplinary point of view, and this, imho, entails working with researchers from different specific areas.

You shouldn't expect a computer scientist to deeply understand sociological/psychological/philosophical questions, but they can and should, to some degree, identify where knowledge from other fields is important. My understanding is that they are not asking researchers to predict the future (although this is apparently what the interviewee implies), but simply to recognize that there might be (societal) consequences, good or bad.

This is why in areas like social robotics and HRI you will find more and more articles that are written by researchers from different areas, e.g. a computer scientist and a psychologist.


Expect of others? No.

But are such questions important? Very.


I considered a similar idea during my last conference, when a casual stroll through the poster session left me thinking "I could do a lot of damage with the papers in this room alone". A simple example: one poster about how to change people's behavior with peer pressure over social networks, and another one about how to detect depressed users. You could do a lot of damage combining those two.

Ethics is not CS' strong point. Adding a statement "this is why I think this research is ethical" doesn't seem like a bad place to start, if only because it would force researchers to actually consider what they are doing.


Well, when you use technology to uncover problems, you open the door to both fixing them and exploiting them. Knowledge is power, and power can be used for whatever you want. And yet no one is thinking about pulling kids out of schools.


I get the intent, but at the same time, doesn't disclosing all potential negative consequences in the paper also provide a sort of guideline on how to weaponize the research?

Seems like a double edged sword, potentially making it easier to negatively utilize the research.


Sounds a lot like the "security through obscurity" argument. I was never a fan of it. If we don't research and communicate the dangers, potential issues might slip by! We need to be proactive in these things.


That's a good point. Security through obscurity doesn't prevent attacks, but may raise the bar of entry and delay a motivated attacker.

I imagine that if all 3D printers were distributed with warning stickers saying they can be used to print guns and knives, more people would try it than if it were less publicized.

Making it just a bit harder to discover the knowledge to weaponize something could give defensive researchers the time to build effective countermeasures.


That’s why informed public debate is not about adding warning stickers to things, it’s about creating competent laws that robustly protect the public.

In your example, the result shouldn’t be warning stickers; it should be laws specifically against building guns with your 3D printer. Or, if the right to weapons happens to be enshrined in your constitution, it’s about legal liability protocols being amended to take into account the fact that a perpetrator used a weapon they built with their 3D printer as opposed to a regularly procured weapon.


I don't really get the intent. The last paragraph makes an attempt to justify it, but I think it doesn't make a very good case.

Think about some of the most famous and influential papers and try to come up with negative consequences. When Codd wrote "A Relational Model of Data for Large Shared Data Banks", should he have tried to think of ways big databases could be abused? When Ritchie and Thompson published the paper that introduced Unix, what could they predict? Or Knuth's many papers on compiler design?

Do other disciplines do anything similar? Computer science papers are often so close to mathematics, I suppose mathematics papers should include similar disclosures, right?

This feels a little like putting cancer warnings on everything in California. At least there I understand what they thought the outcome might be but maybe they didn't foresee how quickly people tune it out and you end up making it harder to communicate danger when it's actually there.


But if it's public then anyone concerned can watch for it, or it can be evaluated for regulation/law changes.


Since we can't predict the societal consequences, no, no we shouldn't 'disclose' them.

Also, this isn't the job of peer review. Peer review is hard enough, let us focus on reviewing the technical details, since that's what matters.


There’s precedent here. Einstein notified the US President when it became clear how dangerous the then-recent developments in nuclear physics were. That allowed politicians, who would have had very little knowledge of physics, to do their job effectively according to the “new landscape” that such developments presented.

The mistake he made was that he only alerted the authorities, which led to atom bombs being developed in secret without any feedback from the public as to whether this was a direction that they supported.

What we need in 2018 and beyond is for current and future developers of new technologies to alert both authorities and the public so that politicians can legislate for that technology’s use with public input as to what limits are deemed appropriate by the public at large.

The deep fakes example highlighted elsewhere in the thread is a perfect example of this. How can politicians legislate for this technology when they don’t even know it exists? How can the public indicate to those same politicians that they feel that deep fake technology used to create revenge porn is something that the public wishes to be made illegal?

Laws, and society as a whole, are feedback mechanisms that rely on information. Those with that information have a moral obligation to alert the public to consequences that will affect them.


Interesting point re: Einstein. However, I think what he did was literally to ensure that the US won the war, not disclose a societal consequence. Since that was an existential threat, it falls under different rules. Object detection is not an immediate existential threat.

Also, I kind of think the secrecy of the Manhattan Project was essential, and after carefully reading multiple histories, I am convinced that Leslie Groves was a genius who did more than a good job managing the project. If you look at how he declassified the project, they did everything right. They had a historian with access to all the data. And scientists who understood the context. And they worked together to release as much information as they possibly could, and even helped make the case for transitioning control of the weapons to civilians.


Of course they were all geniuses. And the project as a whole definitely was done in as ethical a way as possible. They weren’t bad people and I’m not implying that they were.

But the point isn’t anything to do with any of that. The point is that without the public’s input, these weapons were developed and used in the very first place, without an informed debate to decide if this is how we want to conduct ourselves.

This kind of debate was as common back then as it is today, and back then it led to chemical weapons being banned after WW1 under the 1925 Geneva Protocol, with the support of pretty much every country.

And since then, for the few years that the US was the only one with nukes, if not for the extreme restraint practiced by the higher-ups, they might have happily used nukes more times (at the time, there was huge pressure to just nuke North Korea to end the Korean War).

Once it became clear that the “enemies” would catch up in the development race, meaning that any future war would be nuclear, the public debate converged very quickly on the current philosophy: MAD is the only outcome of a nuclear war.

If this debate had been allowed to be fully conducted in an open and informed manner before nukes were developed, we might live in a world where nukes were outlawed at UN level in the same way that chemical warfare was, long before they were even developed in the first place.

Extrapolate that thought to aggressive data-gathering & storage by social media sites, genomic-information ancestry services, tracking technologies and techniques developed in the name of marketing, facial recognition technologies from security firms, three-letter agencies recording and monitoring every web user’s actions, profiling techniques to identify depressed users, etc etc etc. Right now laws are not robustly protecting the public from misuse of these technologies. In fact, a lot of the misuse of the above technologies is directly due to the fact that politicians know they’ve got a powerful technology at hand and decide to develop it. And when informed public debate happens, when the negative outcomes of misuse of these techniques become so obvious to the public at large that they demand political action en masse, laws with more robust protections for the public’s data will be forthcoming.

But those are the technologies that we know about. (At least, we nerds!) What is currently in development that has similar potential to be weaponized or misused that none of us know about yet?


I gotta say, ultimately, I support nation-states who develop weapons to address existential threats in secret.

I do so because I think any nation which doesn't do so will be replaced by one that does, and surviving is better.

By the way, I've already kind of gone through these sorts of thought experiments, and voted with my genome: I open-sourced the data in my genome voluntarily (https://my.pgp-hms.org/profile/hu80855C) because I don't really subscribe to the "grim meathook future" scenarios you're describing.

In general I think informed public debate is great, but in the specific case of the Manhattan Project, I really don't think having an informed public debate during the war was even a remote possibility.


Einstein alerted the authorities about the dangers on Aug 2nd, a month before the war had even started and long before the US had joined the fight.

But the authorities kept it secret, choosing to develop these weapons.

But, had those weapons not been developed in secret and had an informed public debate occurred before development, that debate would have inevitably led to the MAD doctrine. That is the only inevitable outcome of nuclear war.

Without the guidance that MAD provides, politicians in 1939 did what they thought was the rational choice and chose to develop these weapons so that they weren’t left unprepared if the enemy developed their own nukes. But with the knowledge that MAD is the only outcome in a post-nuke world, totally different scenarios become possible. They might have chosen to take action at the international level: nuclear non-proliferation treaties even before the weapons were developed. Stifling the public debate just delayed the notion of non-proliferation treaties; it did not stop them. The only result of an informed public debate on nukes is non-proliferation treaties.

If all that had been done in 1939 as opposed to the ’50s and ’60s, the war could have been prevented before it even began!

Regardless of the crazy whataboutery, the lesson is that the public debate resulted in the ethics being decided as: “nuclear technology leads to MAD when used to create weapons, therefore laws are needed to protect the public from misuse of nuclear technology”. This debate is needed for every new technology.

Technology moves fast, faster than ever these days. And currently, we’re used to a situation where we typically conduct these debates retroactively, legislating against misuse after the fact, once new technologies are created and then misused. Whereas, as this article is highlighting, the ethical way of doing things would be to have informed public debates that allow laws to be created to robustly protect the public before a new technology is misused.

Note that nobody is branding “new technologies” as bad. Or evil. Or anything like that. The call is simply for creators to allow this public debate to take place openly, to decide whether the new technology can be misused, what the repercussions of that misuse would be, and whether new laws are needed to protect the public, before the possibility of their new technology being misused is even a factor.


We may not be able to definitively know all societal consequences, but we can certainly predict some of them.


And how accurate will those predictions be?

Nobel predicted that dynamite would end all wars. Turns out, he was very wrong. On the other hand, plenty of technologies in recent decades have prompted predictions of impending dystopias, none of which materialized.

This all sounds like trying to blame scientists for not telling us the bad people will do bad things, instead of stopping bad people when they're doing bad things.


Hold on, you’re assuming a lack of dystopias just because you don’t see one right now. That’s because laws have been passed to restrict that possibility. The public debate has been conducted on a lot of technologies that could have led us to a dystopia, yet you’re ignoring them totally because the dystopia never materialized?

- police state actions and encouragement: entirely possible, especially now that we’ve all got location trackers that also contain our innermost thoughts in the form of our text messages, yet using all of that info against us is totally illegal.

- subliminal messaging & propaganda: never more possible than right now with our totally connected lifestyles, yet totally illegal.

And on and on. Just because the worst hasn’t happened doesn’t mean it wouldn’t happen if left unchecked.

Also, Nobel thought he’d created the most explosive substance that would ever be possible; with hindsight we know that dynamite wasn’t that, but nukes absolutely are. And his conclusion was correct in the sense that nukes are the next step up from dynamite: with such powerful weapons, battlefield wars between those who hold nukes have effectively ended. (Proxy wars notwithstanding; if those countries had nukes they wouldn’t be used as pawns in a proxy war.)


> since that's what matters

I think the societal consequences are "what matters"

The technical details _literally_ don't matter without societal consequences


No, a scientific article's job is to disseminate knowledge in an impartial way. Peer review exists to verify the technical details are not obviously in error.

Since we can't predict the consequences of our actions (few scientists, if polled, would accurately predict the consequences of their papers), it would be entirely speculative.

Imagine a paper that showed an amazing improvement in machine learning: we could suddenly predict diseases just by looking at pictures of people's retinas, or predict any number of other fairly private things. Should a technical reviewer say "I think this could and will be used to discriminate against people, so it should not be published"? I'd far rather the reviewer just say whether there was an error, have the paper be published, and let Faculty of 1000 or the larger community debate whether and how the technology should be restricted.

Note that editors, not peer reviewers, have far more latitude to reject papers due to (their perceived, speculative) societal consequences.


Compsci researchers aren't the best people for predicting the societal impacts of technology. It is a different skillset. People are not anywhere near as predictable as machines. You need focus groups, blind studies, and linguists to build your survey questions. And you need the patience to work with people who really don't care to learn anything about technology. Many a predicted revolution has been laughed away once the non-techs had their say.

Google Glass.


I’m pretty sure the people at Facebook fully understood what they were doing when they gamed the UX to get people addicted.


And that's how it should be. Technology that is obviously beneficial is rapidly adopted by the populace regardless of market forces. Technology that plays with human notions of decency and morality is often just as rapidly shunned. The ultimate filter of what is good or bad is general acceptance.


Thinking back to my time in grad school, I feel that the list of things I would have come up with as negative societal consequences would have at least outed my political opinions and at worst made me look paranoid. You could greatly mitigate the political aspect by not attaching a value judgement to which foreseeable societal consequences should be disclosed. You might be able to mitigate the second effect by having some guidelines about likelihood and how long-term the consequences need to be.

Over time, people will probably turn this task into copying and pasting a standard list for their research area. Existing researchers still mostly won't have to think about societal impact, but maybe it'll have an effect on which research areas people choose to work in.


What is a negative social consequence? In the interview e.g. the 'destruction' of jobs through automation is given as an example. I personally do not agree that this is a negative. Freeing people from forced labor and allowing them to have other pursuits I see as truly positive. The fact that we maintain an unsustainable socioeconomic system in which one's right to live is tied directly to one's role in a planet destroying economic production system is the real problem.


What are 'negative social consequences'? That seems like a value judgement to me, and potentially highly political.


A more reasonable thing to do is to form a council of computer scientists, psychologists, sociologists, economists, politicians, and people otherwise capable of having a useful opinion in the matter.

That council would then have the task of reviewing new technologies and their potential impacts. The council may then have different subgroups each discussing a certain technology, or possibly each with a certain specific task such as: reviewing research papers, watching the developments inside companies, writing legal propositions, etc...

This council could then be under a government agency, or better yet, an NGO that can get funding by the government, from donations, universities, etc...

What would you think about that?


Absolutely not!

A council, no matter how large, is, relative to the general public as a whole, made up of a small number of individuals.

Such a council would raise so many ethical concerns, such as: Who are these individuals? Why are they on the council? What are their biases? What do their livelihoods ultimately depend on? What conflicts of interest do they have? Among hundreds of other questions.

Only an informed debate among the general public can lead to laws that robustly protect the public at large because vested interests get drowned out by informed dissent from other, equally-qualified, voices who are not affected by such vested interests.


Since most people seem to be missing some context and commenting about the headline, just be aware that the proposal is smaller. Most papers have a section that essentially says "This work is important because of XYZ. It will enable better ABC." They already talk about societal and technical consequences, that's nothing new. The proposal is to not sweep the bad bits under the rug and be as rigorous in that section as in the rest of the paper. It seems fair. If you want to hype why your work is high impact, then you have to be honest about the anticipated impact.


This article is not about what could happen in the future.

This article is about what does happen now, and what goes undisclosed because of stigma and fear of personal repercussions.

That was never the point of academia. These institutions exist, with all their rules and regulations, precisely because to discover and uncover, one must not be afraid that doing so will cost one the roof over one's head.

Presently, this article is being severely misinterpreted.


There was a story on public radio recently (maybe Radio Lab?) about synthesizing voice. Adobe has developed a "Photoshop for audio". One of the hosts interviewed a computer scientist on the subject and really pressed on the ethical uses of such software. The scientist was flummoxed and had not really considered the ethical implications before. I would not be surprised if scientists do not consider how their research could affect society.

https://arstechnica.com/information-technology/2016/11/adobe...

https://www.wnycstudios.org/story/breaking-news


It is absolutely nonsensical to think that someone who creates a thing should be held responsible for every negative use or misuse of what they create. It's even more nonsensical to expect someone to come up with every single one of these scenarios that could possibly happen. It's just not realistic, doable or even helpful. At best, it will have an extensively negative effect as every single new development is tut-tutted into the shadows by a group of hand-wringers, terrified of what a user 7 sigmas away from the mean might do with it one day. This is an appalling way to construct a society, let alone a discipline that requires rapid, ground-breaking iteration.


I think a lot of the HN community see themselves too much as grand inventors: Prometheus bringing fire down from the mountain, ignorant of the future of war, gunpowder, and bombs.

My career in technology has taught me it's a lot more obvious than that: it's when your marketing team starts buying massive collections of data, or your sales team starts selling ad data to third parties. We know these things happen, and we know the grey morality of it. Yet we turn our backs to it.

The reason I think HN is so quick to jump on these topics is because if we search ourselves we see that our morals rarely stand up to our self-proclaimed ideals. Jobs are sometimes hard to find, stability is nice, and dying on that hill doesn't always seem important.

Those are the negative societal consequences we are most dishonest about. Not Promethean delusions of grandeur.


This is a tough one, since researchers don't control how their technology is used.

And it's very difficult for even the experts to balance the positives against the negatives: facial recognition could be used to find a serial killer ... or a missing child.

As one example, RMS has long advocated against the "ethics" of certain technologies and companies, while Emacs and GCC have greatly accelerated the adoption of technology, generally. And those tools have almost certainly been used directly or indirectly to build some very, very bad things.


There used to be an organization called Computer Professionals for Social Responsibility (CPSR). It grew out of Xerox PARC. I just looked at their web site and they don't seem very active anymore.


Warning: this sorting algorithm may lead to the proliferation of fascism.


I see this as a natural extension of the “Threats” section. In comp sci (especially software engineering), conference papers are required to have a “Threats to Validity” section to address reproducibility and generalisability concerns. The NSF requires proposals to describe their broader societal impacts, but that's typically to answer the question of how the research will positively impact society. I've rarely seen it used for negative societal consequences.


The ethics of a technology don't have to be so lofty. It's a real kick in the happiness meter when some code you have written puts people out of work. At this stage in my career, I actively make arguments for why I (we) shouldn't do something when its intent is to reduce labour costs.


But basically the whole point of doing something in software is to automate things that are labor intensive. If you're avoiding things that save work you're not doing your job as a software engineer correctly.

It's kind of like using an Ox in modern farming because you want more farmers to have work. You wouldn't do that because your farming business (barring some great marketing that lets you recoup the costs) would just go bankrupt quickly.


> But basically the whole point of doing something in software is to automate things that are labor intensive.

I see where you're coming from, but that's quite an oversimplification, don't you think? Software that drives a computer's graphics card, and software that uses it to let a physician explore a 3D MRI, are just two examples of tasks that do not consist of automating manual labour.

(You could probably pay a lot of painters to paint a lot of pictures but hooking them up to the MRI might be a little difficult).

There's a lot of software that achieves new things, rather than old and well-known things really fast and without involving hands or a human brain.

I prefer to avoid discussing ethics on forums like HN, so I won't comment on whether or not working on software that puts people out of work is "the right thing to do" -- suffice to say that, if you wish to avoid doing that, there's plenty of room to do it while still doing your job as a software engineer.


> There's a lot of software that achieves new things

Precisely. And the only way those new things are thought up is if someone has the time to dedicate to the task. If that person instead has to spend 12 hours a day foraging for berries just to stay alive, then they won't have time to build 3D MRI code.

The gains to productivity brought about by mechanical -- and now digital -- automation have lifted billions out of poverty and extended their lifespans by decades. I agree with GP that the only purpose for software is to automate a human task. Lucky for us, software is doing things that humans can't do at all (or would take years of effort, in the case of your MRI picture).


I disagree. You can do engineering to solve a wide range of problems. Some problems are about cost reduction, but some aren't: you could have an engineering component in an artwork, a new unique luxury product, or any other societal problem not oriented toward cost reduction.

By the way, you can always find a "cost reduction" in anything you do; it's a matter of point of view and time horizon.


Productivity gains are the only way to raise the standard of living for the broad population. Automation is a huge part of that. Think of the vast improvement of the productivity of labor for agriculture and textiles that occurred during the Industrial Revolution.

Now, one farmer can work a hundred acres before breakfast with crop yields higher than ever due to technology. Would you have us go back to an ox and a plow so as not to kill agricultural jobs? Same story for textiles. Should we reopen the vast mills that employed thousands? We don't have elevator operator jobs. The printing press put all the scribes out of business.

Contrary to your viewpoint, the economic thing to do would be to automate as much work as possible. When labor is more productive, you can earn the same money in less time. This is how we can afford to not work weekends, not because of labor laws. When part (or all) of a person's job is automated, it frees them up to do something else. More importantly, they can take some workload from someone else's plate, freeing the other to do higher-order work.

How many brilliant minds throughout history never made a significant contribution because they spent their lives toiling in some field or hunting/foraging for their own food? It is only when material needs are easily obtained that the time is freed for the intellectual to think. Historically, this was only the aristocracy, privileged from birth, that had free time because of their serfs and vassals doing the labor. Compare to today, where anyone with enough intelligence can make such contributions, precisely because they're not trapped in a brutal subsistence lifestyle. (Yes, there are tons of issues about education and opportunity here. But we don't have to gather our food to survive anymore.)


I think that's an essential point, and though it's basic economics, it's often overlooked (and rarely made here). I'd take it a step further and say the grand project of building a better society (more freedom and economic opportunity) also is overlooked.

However, while automation helps in the aggregate, the world isn't lived or experienced in the aggregate but on the individual level. Put Jeff Bezos and 9 people living without shelter or money in a room, and in aggregate their average wealth is $15 billion per person; but that number tells you nothing about the individuals. When the factory closes due to technological advancement, sometimes those individuals can't get another job or one that pays nearly as much.

So we need to both encourage technological advances that increase productivity and make sure they benefit individuals. I think the basic principle is that capital can move much more quickly than people - e.g., you can pull the money out of a factory in Kolkata or Indianapolis and invest it in one in Hanoi much more quickly than the workers can move to Hanoi (probably impossible) or find another job anywhere. The answer may be much better unemployment insurance, retraining, and laws slowing economic changes enough that individuals can keep up, to a degree based on the economic impact: e.g., closing a big factory in a small town is higher impact than closing a startup in London.


Putting people out of work is not necessarily a bad thing. The clothes washing machine put a lot of people out of work, but I think society as a whole is better off with it. I sure don't want to live without mine.

Of course, society has to be able to sustain and re-educate these people for new productive jobs. That's the part where most societies have failed, leading to civil unrest.


Should Nature disclose negative societal consequences (real, and present, not predicted)?

It’s not like CS researchers are the ones turning science into a dumpster fire to sustain 30% profit margins.

Most of the more hysterical screeching about AI, automation, etc is reminiscent of buggy whip manufacturers bitching about cars.


That attitude is what turns non-nerds away from asking our opinion on the technologies that we develop.

If you don’t want to be involved in the debates about AI, automation et al, then you can’t really comment on those who do. If you do want to chip in, inform the public about the realistic dangers. What they need is realistic information from experts so that they can deduce what risks are acceptable and what risks are unacceptable.

Even the most untechnically-informed member of the public will get the positives: AI, automation, or whatever technology you’re developing makes software better in some way. It doesn’t even matter how or why; it’s taken for granted that you’re developing something that you believe will be a net positive.

But it’s the negatives that are what needs to be legislated for. Regulations and laws don’t give technologies a pat on the back, they protect the public from misuse of a technology. Your voice would be welcomed.


Automation will destroy certain jobs (as did cars for buggy whips). What pisses me off (and I believe ought to piss off others) is the monetizing of these types of works behind paywalls (as Nature and Science, the journals, are wont to do) when public access to the primary sources would help inform debate.

Reading tarted-up flag-planting arXiv papers can be tough going even for practitioners, to say nothing of the general public. But ultimately the general public will have to vote on propositions, or vote in representatives, to pass sensible (not hysterical) regulations.

For example, after an Uber lidar-equipped vehicle killed a Phoenix jaywalker (jaywalking laws being a separate part of the law that I dislike, though they do acknowledge physics and shitty drivers), Uber halted testing in most markets. This is sensible based on public reaction, but three major problems compounded to take Elaine Herzberg’s life:

1) the lidar did not function as designed, 2) the backup driver did not function as instructed, and 3) Ms Herzberg did not obey the law.

This is a situation where the facts eventually became clear, and still it is difficult to say that one side or the other overreacted (personally I think Uber did the right thing PR-wise, but I also worry that less testing means slower progress towards safer cars, since human drivers are legendarily shitty and much less safe than conservative self-driving cars with functional lidar).

Now let’s consider more complicated problems, such as replacing medicinal chemists with reinforcement learning systems, or China’s AI approach to “social credits”. Both have potential benefits and huge potential drawbacks, and both are relatively easy to grasp compared to some thornier cases. One is purposely obscured by a dictatorial regime; the other is obscured not by administrative fiat but by copyright law and a very silly tradition in academic and quasi-academic research. Hiding results in paywalled “prestige” journals rather than making them available to the general public is seen as a mark of distinction (and avoiding a $30k-$50k open access charge is just sensible conservation of research funds, in this scenario; I know because I am an academic and I personally think OA charges are a waste, though for different reasons).

Anyone who wants to [break the law] can obtain the primary documents in the copyright-obscured case via sci-hub. That’s just a fact. But is this any more reasonable than Georgia allowing companies to charge for access to state laws? Is it reasonable for people to pay twice, in most cases, first for the research* and then for the report, when faster dissemination of primary sources can directly inform debate over some important issues?

It seems silly that we would have a system where others are expected to pay for, digest, and opine upon research done by the primary authors. It would not be necessary if the norm was to disseminate AI breakthroughs in a format like distill.pub, where the whole point is to explain both the results and their context. But until academic and quasi-academic researchers can kick the “prestige” habit, that’s unlikely to happen, and the public will be left to listen to pundits (often idiots) rather than forming their own well-informed opinions, and that sucks.

No time to edit, but hopefully you get the idea. Disclaimer: I am an academic, applying “AI” to medical care for children with rare and deadly diseases; I used to work at GOOG a million years ago; and what I see among the worst hucksters sickens me, because I think it’s going to kill people in spite of better competing ideas that could improve the human condition. I’m just one person with one vote. I’d like everyone else to form their own opinions, because they might change mine. Thus primary sources are critical.

* anyone who does not believe that taxpayers subsidize industrial & corporate R&D via tax credits and incentives is quite naive; it is no different from academic competition in that regard.


Negative by what metric? And how on earth is a computing scientist supposed to know the social consequences of his work, good or bad?

Doing this would just be feel-good virtue signaling nonsense that would reduce the overall quality of the literature.


"Societal consequences" are an emergent phenomenon. By definition, they are impossible to predict.

Often, the consequences of some new technology are both good and bad.


That’s not really part of the definition of an emergent phenomenon. Yeah yeah, you can’t predict everything. But you don’t need to be a visionary or defeat the rules of complexity science to predict the NSA.


Some people already experience societal consequences that have yet to be academically documented.

That's the point of this article: not predicting what could happen in the future, but surfacing what is happening now and is presently unknown.


This way scientists can create a road map for ethically challenged CEOs and VCs, so that they can get the most profit out of the technology.


So, just to be clear, the HN gestalt is that economists generally don't understand anything, because "homo economicus" is a fiction and they don't understand the impacts of even relatively simple-to-characterize policy changes, due to the complexity of a society made up of irrational actors; but it is reasonable to expect computer scientists to correctly predict the impact of their prospective research, whose practical applications may not yet even be a gleam in anybody's eye, and to decide on that basis what should and should not be researched and/or published and/or written?

The solution to this conundrum is obvious: Take the computer scientists who can make these amazing predictions and move them into the field of economics, where their predictive power can generally do more good for society. Then they won't invent anything harmful and can do trillions of dollars of good for the world economy, which is way better than all but the most black swan-y of research projects they could have come up with.

I'd also observe, HN gestalt, that you believe that pure research should be pursued because it is unpredictable where advances will end up being useful. This contradicts the idea that researchers can guess the societal impacts of their research. If researchers can successfully guess the societal impacts of their research, then people are justified in cutting pure research whenever they decide it's useless, because the judgment process that led to the conclusion "this is useless" is the exact same judgment process you're asking researchers to perform here. Accepting that these predictions can be made will rather quickly result in the elimination of pure research. (Because, pretty much by definition, it looks useless for the foreseeable future.)

If you want to decide this is a good idea and incorporate it into your worldview, you're going to have to give up several other things to get there. And have a frankly irrational view of the predictability of the universe, in my view.

You can't even really escape from this by saying "But it's so obvious in some cases that research is bad", because it really never is. Even germ warfare research can lead to valuable medical insights, and you really never know what valuable medical insights will lead to something that helps germ warfare. The idea that researchers can predict the future and should exercise some social responsibility by doing so is not some sort of major moral advancement; it's a childish regression from something we figured out decades ago, demonstrating our further scientific decay.

(The other inevitable objection is probably around nuclear bombs. Had the Manhattan Project never existed, there is no world in which we'd be sitting here in 2018 with no nuclear bombs. The only change would be who had them when. Nuclear bombs do not exist because of man's hubris and disregard for consequences. They exist because the laws of the universe permit them.)


> economists generally don't understand anything

I think this is taking hyperbole literally. The world is not at all completely unpredictable; it's highly susceptible to reason and science - in a way, that's the entire enterprise of the Enlightenment and its child, science. Predictions aren't perfect, but they are hardly useless. Your world is built on them; your alarm will go off; the train will be roughly on time and won't collide with other trains; investors invest based on economic and business predictions; software developers develop based on market predictions and predictions of user psychology; we can predict how well UIs will work, hardware failure rates, etc. The list goes on forever.

In particular, economics has provided, especially since WWII, a period of economic growth that is orders of magnitude beyond what humanity ever experienced before. Depressions used to be a regular occurrence; they have stopped. Systemic bank failures no longer happen, we know how money supply works, and the impacts of various policies much better than throwing darts at a graph. Recessions are not as bad; sustained growth is a norm, which is a miracle. The growth of the world economy, lifting billions out of poverty for the first time in history, is beyond all but the most visionary dreams of a century ago.

Do economists and economics get it wrong sometimes and to varying degrees? Absolutely. Unfortunately, there is nothing perfect in the world.


Irrational doesn't mean unpredictable. The criticism of "homo economicus" is that it leads people to miss foreseeable consequences.


“Should investors disclose negative social consequences?”


The fact that the article says that “oil and tobacco” are the Bad Industries (that CS shouldn’t become like) says everything you need to know. There is no CS without oil.


Isn't this exactly what Google Cloud, EC2, etc are? i.e., Google and Amazon selling access to their internal infrastructure, or something closely resembling it?


Yes.


I guess I can write down, in the README.md, a list of wild-ass guesses about things that might happen if you use my software.


Phillip Rogaway gave an interesting invited talk [1] (and an accompanying essay [2]) on a related topic at a cryptography conference in 2015. It is quite an entertaining read, even if you are not a cryptographer.

[1] https://www.youtube.com/watch?v=F-XebcVSyJw

[2] http://web.cs.ucdavis.edu/~rogaway/papers/moral.html



