But, we cannot, and it's dangerous to think we can. After we develop technology, we will be surprised at how it is used and what the impact is, and we will have to respond to that, sometimes with more technology and sometimes with legislation and sometimes with changes in social mores. Thinking that developers can just anticipate in advance what will happen is at best delusional, like thinking that if we spend more time on the specifications part of our next waterfall project, we can do better.
Moreover, if Tim Berners-Lee had specified in advance, "this world-wide web thing will probably eviscerate all advertising-supported media, from newspapers to magazines to radio and television", I don't know that we would have made wiser choices as a result of that.
This is incredibly false. Agile (or non-waterfall) style development is useful in specific, limited cases, where the cost of discovering a failure is small. You're writing a mobile app, or a b2b website, or solving some other common problem. It's cheap and easy, and the cost of a failure is so small that it's cheaper to just fail and go from there than to spend any time thinking about how something may fail.
If you're doing something mission critical (the space shuttle, something involving expensive parts, something high risk), you make sure you do thorough requirements analysis, well-specified behavior, and well-defined failure modes. You break the system down into individual small components (the more critical the failure, the smaller the components), create well-defined specifications for each component, test the hell out of each one individually, ensure they work to spec, and then combine them. You don't Agile your way through it.
With Agile, you'd just get something minimal working end to end and iterate. If the thing you're working on can explode, you don't just iterate through a minimal user story. You define each item well, ensure it meets its specs, and understand its failure modes.
Same thing for the topic at hand, the impact of technology. Understand the failure modes of the components you're creating, and write them in your paper.
I would much rather sit in a rocket that has blown up a hundred times on the launch pad but launched successfully a thousand times more, with each attempt being an opportunity to improve it, than one that has been engineered with waterfall and never flown...
As our technology is becoming so much more powerful and evolving at exponential rates, don't you think it might be a good idea to think a little harder about how things could backfire?
These considerations should be taken into account when technology is used, sometimes by policy makers in government.
There's no way industry will police itself; they've proven time and again that profit is far more important than humans.
I'd rather not list out a bunch of evil ideas, but I think any engineer with a bit of thought can come up with plenty.
I think you're arguing the article's point. Many knowledgeable people can come up with dangerous things. Many people have bad intentions. Luckily for society, there often isn't much overlap of those two groups.
This paper is saying "Hey people who have ideas, don't just think through the technical idea you have. Think of how someone with negative intent might be able to leverage this idea to easily do something they couldn't do before. Or even how it could unintentionally happen."
Engineers are not philosophers and should not be placed in this role. We do not have the tools to do it. As with any human we have a duty to consider first order effects - but even the most astute philosopher will have trouble anticipating second order effects. This burden should not be placed on those least qualified to perform it.
Edit: Thank you to everyone for your thoughtful replies. I find this topic fascinating to discuss.
People who create something, who research something, should by definition have a better grasp of their topic than everyone else. To expect that they are more knowledgeable, and therefore better equipped to see what it means, and therefore also have a civic duty to explain it to the less well-informed, is not patronizing. It's acknowledging their expertise.
> Engineers are not philosophers and should not be placed in this role.
This is an interview in Nature. Academics with the ambition to publish in academic journals, be they prestigious like Nature or Science, or smaller and more niche like <insert your favourite journal here>, are taking on a role of a philosopher.
And sure, engineers may "just" apply technology by comparison, but honestly I find that a very dismissive attitude as well. They are the "praxis" to academia's "theory", meaning that what they do actually has consequences in our lives. So if anything, their work comes with even more of a moral obligation to "Do The Right Thing" than that of academics.
They are experts in how the technology works. That doesn't make them an expert in how other humans will take advantage of what they have discovered/created. I've taken the charity of assuming the scientist or engineer themselves had non-evil aims.
> This is an interview in Nature. Academics with the ambition to publish in academic journals, be they prestigious like Nature or Science, or smaller and more niche like <insert your favourite journal here>, are taking on a role of a philosopher.
Yes, but a philosopher on which topic? Someone with a doctorate from an engineering background is not as well educated in the societal effects of technology as someone with degrees in history or other studies of human nature.
Humanity benefits when the broader impacts of technology are factored in. Who does this, and with what influence, is arguably one of the largest themes in human history. This particular proposal is just one of many options.
I don't find this article particularly compelling. Having a requirement for a social-impact section is a blunt instrument; it might become just a pro-forma exercise.
I think I would be more interested in encouraging (not requiring) journals with strong ethical foundations to let authors opt in to discussing social impacts. If they do, I would expect these social impacts to be vetted by social scientists, philosophers, ethicists, and so on -- areas where computer scientists often have large blind spots.
The Ph in PhD actually means something. It's not some kitschy decoration.
I'm sure that comes across as a dumb question, but, modernity - technology.
The irony of mathematics being naturally built in, reinforced into all of us, over and over. Computer science.
> From the ancient world, starting with Aristotle, to the 19th century, the term "natural philosophy" was the common term used to describe the practice of studying nature. It was in the 19th century that the concept of "science" received its modern shape with new titles emerging such as "biology" and "biologist", "physics" and "physicist" among other technical fields and titles; institutions and communities were founded, and unprecedented applications to and interactions with other aspects of society and culture occurred. Isaac Newton's book Philosophiae Naturalis Principia Mathematica (1687), whose title translates to "Mathematical Principles of Natural Philosophy", reflects the then-current use of the words "natural philosophy", akin to "systematic study of nature". Even in the 19th century, a treatise by Lord Kelvin and Peter Guthrie Tait, which helped define much of modern physics, was titled Treatise on Natural Philosophy (1867).
> "systematic study of nature"
And what do we do? What is nature to study? What is life to study?
Our own puzzled face, staring back at us.
"Real studying". Figure out what it means when you don't have anyone to design tests to check your own knowledge for you.
Bongs. Is it fun to insult people for thinking?
Real studying only means something when you have a teacher and an educational institution. The real world is much harder. You can watch your own understanding loop around itself hundreds of thousands of times and still not know: do I know anything at all?
How many times do you have to observe something before you accept it as fact? One time? Two, perhaps once indirectly? Three? Infinitely many? How many ways can a truth be conveyed, and how many ways can you be made to interpret it, so that you know the difference between being entrusted with an awareness and being manipulated into accepting it?
Socrates, aka "I know that I know nothing", had 3 books written about him. Aristophanes, The Clouds - he was a bumbling fool. Ever think about why he appeared that way to different sets of people?
Post-modern philosophers will quite correctly argue that you can know very little about things just by observing them, and in fact can never get close to complete knowledge that way.
Mechanical languages though, mechanical systems. Not particularly interested in quantum physics. That breaks things down so much to the point that everything can unfold into a somewhat distorted method of processing your own knowledge. Like trying to understand something from both yourself, and what is not yourself.
Mathematics. Logic. Those things can be accepted at face value. I think that's the closest one can get to complete knowledge. That's my opinion though.
Actual studying. I'd say it's next to impossible to have a complete understanding of anything. You can always think you have the full understanding, but that's just very specific to whatever test or goalpost is designed for you, or that you design for yourself.
Bertrand Russell wrote 370 pages of a book to prove 1 + 1 = 2.
Is that coherent enough?
Are you referring to relativism vs. objectivism? Or that people can have an understanding of a concept, but lose it in trying to explain it?
I'm really trying in good faith to get you to make what I consider a coherent response. Personally, I've studied rhetoric, logic, philosophy (the moral kind), and science (the natural philosophy kind). I have a subtle enough knowledge of all of these to understand and appreciate nearly all points of view and have intelligent discussions with a wide range of people on a wide range of topics.
But, so far, nothing you said seems to contradict my statement that the P in PhD refers (with the exception of people getting PhDs in philosophy departments) to natural philosophy, which is to say, the quantitative, experiment and hypothesis based approach to understanding how the universe works, as opposed to moral philosophy, which is a primarily logic-oriented approach to abstract and concrete problems experienced by humans.
Those two intersect to some degree, and some people who obtain PhDs in quantitative fields do spend time being introspective about moral philosophy, but it's not commonly required in the form of coursework or research.
Perhaps the closest relationship between moral philosophy and natural philosophy (which is what we now call "science") is math, and it's still very much an undecided problem whether math and science are the same thing (https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.htm... and https://en.wikipedia.org/wiki/Mathematical_universe_hypothes... being the two most interesting statements on that problem) and whether the logic employed by philosophers is the same thing as mathematical logic. There is some useful prior writing on this, for example https://plato.stanford.edu/entries/philosophy-mathematics/
Ultimately though, I just can't see what point you're making. I think you're saying that all knowledge is relative and even scientific/natural philosophy is identical to moral philosophy, rather than "all students of the natural sciences should also take classes in philosophy to develop a more intuitive understanding of epistemology".
You kept insulting me. Why would I try to make a clear point?
> The Ph in PhD actually means something. It's not some kitschy decoration.
In Germany, the doctor's degrees are quite different. Typical ones for the HN audience are (for a long list cf. https://de.wikipedia.org/w/index.php?title=Liste_akademische... ):
- "Dr.-Ing." for engineering (typically including Computer Science)
- "Dr. rer. nat." (doctor rerum naturalium) for natural sciences (typically also including mathematics). Even though the university of Frankfurt am Main issues the "Dr. phil. nat." (doctor philosophiae naturalis) title instead, the "nat." ending for the latter one is very important to distinguish it from the "humanities doctor's degree" "Dr. phil." (doctor philosophiae)
TLDR: In Germany, the names of the doctor's degrees make it very clear that they have nothing to do with philosophy.
It literally looks like anyone can change knowledge, override it, mutate it, destroy it, falsify it.
I don't know why most people get PhDs these days, you are correct. But I know what it used to mean, and I hope it means that one day again, soon.
In the real world you have to jump through hoops no matter what you do. Ignorance of them is stupid.
We live in society, that doesn't make everyone a sociologist. It just means everyone who lives in society should at least devote some portion of their thinking to the contemplation of society, including institutions of education. People in academia are not omnipotently protected from the forces of society. If society doesn't value these institutions, they go away.
The Ph in PhD is important. It really is, and it's idiotic to let that distinction be dismissed as mere obsession with a pedantic, tiny niche of no real consequence.
I've considered myself an autodidact for much of my life. I've also had a great education (MSc, only worked on PhD for one year and left school due to health complications). Comparing these two worlds is hard, but teaching yourself when you can be taught by others is the hard, long, and painful way to learn stuff. And you never know what you know when you only have your own knowledge to trust, checking against everything else, with no real stable, structured, foundational, 'stand on the shoulders of giants' mechanic for the discovery, pursuit, and application of truth / knowledge.
You can have some idea that the individual mind independently can be related to the entire body of academia like it's some metaphor for their functioning as an organizational entity, a super colony of minds that function like one - tested against the rest of the world - but that's oversimplified. Our own puzzled face, staring back at us. We are all part of one big system, but some stuff, distinction really does matter. I don't talk to many people who have PhDs that I know of in real life, just one. He's the only person who seems to really understand the absolute delicate fragility of, and the consequential burden - that comes with the discovery, creation, pursuit, and retention of knowledge, at least that I know of, in real life.
How does that relate to having an extra understanding of sociology, or philosophy? I honestly think my point is built into the experience I am relating, or at least I have to hope I can still convey that much, correctly.
Philosophy should be important to society, and it's a shame when it's not.
That's a little freaking ironic, if you don't mind me saying. A PhD represents its own authority by virtue of being awarded by the tiny fraction of people in the world who actually understand the problem space you might be studying well enough to say "Research accepted. We trust you can take our place as educators and researchers in the future when we are dead."
Also, there's a good reason the Greeks had that archaic way of categorizing knowledge. Hyper-connectivity of all knowledge available at once is a total clusterfuck to deal with from an individual mind perspective. Just because it's old doesn't make it wrong. History repeats itself, etc.
At least in computing, this seems to have been largely false. Whatever my respect for the expertise of creators, their track record on predicting the future uses of their developments has been genuinely bad. I don't think that's an indictment of their skills, just an acknowledgement of how complex and rapidly-changing systems really work.
Understanding the future impact of ARPANET was impossible without also anticipating the rise of home computers, but the creators of ARPANET had no special expertise on that point. Computability experts and chip designers got predictions about topics from weather modeling to AI Go play totally wrong because everything from GPU computation to neural network advances rendered their accurate domain knowledge misleading. And the track record of AI experts predicting the real-world role of AI is so infamous that I don't think I need to elaborate.
Honestly, I expect that this would be computer science's Modernist moment. Corbusier and friends were genuinely excellent architects, but the move from "designing on sound principles" to "trusting that we know all the future uses of buildings and designing for those" was disastrous. The result is a legacy of inflexible building design, hostile urban planning, and government-embraced block housing that caused real harm to the people it was inflicted on.
"A better grasp on their topic than everyone else" is presumably true, but it's not enough. Complex systems are often beyond what any one person can predict, and that only becomes more true when they're also rapidly changing from many directions. I don't think most creators in computer science can make predictions good enough to act on until their creations have been exposed to the world for a while.
First of all, I love me a good Le Corbusier bash, and I totally agree with you here. I also get where you're coming from.
However, what the article asks for is the equivalent of demanding that Le Corbusier ask at every step, "how could my design, supposedly built on sound principles, have negative consequences?", which isn't something I recall him doing.
Of course it won't be enough, but one could argue that if the experts don't even bother to try, things will be even worse.
This is a good point, and I'm very much in favor of that. The man himself seems to have been much more fond of questions like "how can I force people to accept a version of this design that uses round numbers for elegance?"
I think the proposal made me jumpy because I have so many bad associations with asking technologists to explain the consequences of their work to others. It calls to mind things like The New Digital Age, which seemed totally devoid of introspection (and frankly stupid or even malicious in a lot of places) but was a huge hit with people making important choices about investment and regulation.
I'm still very hesitant about things like the mention of public funding boards using these assessments to shape their decisions, but I appreciate your point about preemptively looking for possible negative outcomes. Something like a medical attitude that considers failures and negative events separately could potentially make a lot of sense. We live in a world where the UK police are using facial-recognition software which provides 95% false positives, and it would be nice to at least have an academic framework for talking about how that's not just ineffective but actively bad.
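The "95% false positives" claim is a textbook base-rate problem: when genuine matches are rare in the scanned population, even a system with decent-sounding accuracy produces alerts that are almost entirely false. A small sketch with purely hypothetical numbers (the function name and rates below are illustrative, not figures from the UK deployment):

```python
def match_outcomes(population, true_targets, tpr, fpr):
    """Expected (true positives, false positives) for one screening pass.

    population   -- number of faces scanned
    true_targets -- how many of them are genuinely on the watchlist
    tpr          -- true-positive rate (sensitivity)
    fpr          -- false-positive rate
    """
    true_pos = true_targets * tpr
    false_pos = (population - true_targets) * fpr
    return true_pos, false_pos

# Say 100,000 faces scanned, 50 genuinely on the watchlist,
# with a 90% true-positive rate and a 1% false-positive rate.
tp, fp = match_outcomes(100_000, 50, tpr=0.90, fpr=0.01)
share_false = fp / (tp + fp)  # works out to roughly 96% false alerts
print(f"{fp:.0f} false alerts vs {tp:.0f} real ones; "
      f"{share_false:.0%} of all alerts are false")
```

The point is that "the matcher is 99% accurate" and "95% of its alerts are wrong" are entirely compatible statements, which is exactly the kind of framing a social-impact assessment could make explicit.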
Hold your horses, some of us engineers have philosophy degrees!
In all seriousness, engineers are more than capable of considering the impact of their creations. But can you predict future uses for technology with accuracy? Of course not.
I'm not sure how relevant it is for most work. People will always find ways to use technology to the detriment of humanity, but the overall effect of technology is beneficial. The internet is obviously for the benefit of humanity, yet there are plenty of people using it for evil.
The small set of engineers with philosophy degrees does not negate the parent's statement.
In any case, one doesn't need a philosophy degree to do philosophy, and many of the great engineers and scientists of history had a great deal to say with regard to the moral implications of their work.
Even today, engineers at many of the big tech companies are objecting to some work their companies are doing. Many others choose not to work for companies whose mission they disagree with.
Similarly tech shouldn't have to consider whether someone with unrelated goals will use their tech for unwanted aims.
If you write software for autonomous cars that is not foolproof at detecting pedestrians, you are culpable. In aviation, catastrophic failure means at least one death, and the system should have an overall probability of failure of 10^-9. Critical software is frequently written at one or two orders below that. That means 95%, 99%, 99.9% are not enough.
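A back-of-the-envelope sketch (hypothetical numbers, not an actual avionics safety analysis method) of why component reliabilities like 99.9% fall hopelessly short of a 10^-9 system target: for independent components in series, individual failure probabilities roughly add up.

```python
def system_failure_prob(component_fail_probs):
    """P(at least one component fails), assuming independent failures
    in a series system where any single failure takes the system down."""
    p_all_ok = 1.0
    for p in component_fail_probs:
        p_all_ok *= (1.0 - p)
    return 1.0 - p_all_ok

# 100 components, each "99.9% reliable" (failure probability 1e-3):
p_sys = system_failure_prob([1e-3] * 100)
print(f"system failure probability: {p_sys:.3f}")  # about 0.095, nowhere near 1e-9
```

To get the whole 100-component system near 10^-9, each component needs to be around 10^-11, which is why safety-critical work obsesses over per-component specifications rather than iterating on an end-to-end prototype.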
If you write software that replaces someone's job are you "culpable"?
If you create a phone and people are killed when someone texts and drives are you "culpable"?
If you create a better facial recognition program as part of building a secure access system and someone uses that system to discriminate, are you "culpable"?
I find this unending process of creating "victims" just so they can be "saved" as tiresome.
If a thief comes and steals money under the mattress meant to pay for your kid's college tuition, and your kid is dropped because she can no longer pay, your kid's loss is a second order effect of the thief's theft, but I still hold the thief responsible*
* this is a hypothetical story; please don't save that kind of money under your mattress.
1. Did they have nefarious aims?
2. Did they reasonably attempt to consider and solve issues surrounding first and second-order effects of their creation?
3. How did they react to previously unknown first and second-order harms when they found out about them?
The first two, I tend to give creators the benefit of the doubt about. But I've become quite dismayed by the responses of creators to the harms of their creations.
Tell me, in the VW emissions scandal, did that happen? No.
Germany literally changed the laws retroactively to avoid holding their own companies to account. This is technically illegal, but the judicial system REFUSED TO LOOK AT IT.
Given that both the management of these firms and the government want to commit fraud when culpability is assigned... what point is there in discussing the rules? They won't be applied when it matters.
But of course there are large advantages to people believing they will be applied. Like there are advantages to people not responding to having their wallets taken from their pockets, their children kidnapped for sale into abuse, their houses robbed empty ...
This is the US government in action. Other governments, including European ones, are worse, not better. These are the people that make and enforce the rules. Why are we discussing whether the right things will be happening?
It is the second word, intent, that is the hang-up. If a person hurts someone but it was not their intent to do so, that person is typically not held morally responsible. So, at least in the Western concept of moral culpability, intent matters. But... people also have a responsibility to try to anticipate the consequences of their actions. One cannot shield oneself from culpability by intentionally remaining ignorant. I think that engineers avoid thinking about these things not out of humility, but because it's in their interest to do so.
> I think that engineers avoid thinking about these things, not out of humility, but because its in their interest to do so.
This is perhaps the reason we differ. My experience has been that the vast majority of technologists want to make the world a better place.
I believe you see the industry as filled more with people like Wernher von Braun, who want to build something cool regardless of social cost. At the dawn of the social media revolution, the dominant theory in tech circles was that information shall set you free. The people making it did not come into the industry for status; being a nerd was still a social negative. Today, with ever-rising salaries, things are changing.
Perhaps you are right that as the industry matures, the sphere of liability should increase. I would still argue that the suggestions in the original article swing that much too far. We shouldn't prevent scientific discoveries merely because engineers may build something bad with the information. On balance, scientific discovery has improved the lives of everyone by many orders of magnitude, even if it's distributed unevenly.
Again a first order effect
> If you write software for autonomous cars that is not foolproof at detecting pedestrians, you are culpable.
Again a first order effect connected directly with the immediate goal.
However if you write communications software and people use it to say bad things, those are second order effects. You provided communication software that works as advertised and has beneficial purposes. Others misused it.
Not trying to be inflammatory. This is specifically trying to find if you think there's a point where culpability disappears. I would appreciate a yes/no answer. There's no nuance to the question. I'm not talking about the specific valves, just the notion of a valve.
Did you mean first order and second order the way physics people mean it (a direct linear relationship with a strong effect on the unknown variable)? Or the way that MBAs mean it (they have their own definition)?
Or did you mean structural engineering, same as physics:
I think what you probably mean is a colloquial interpretation, where first order means something is directly causal in an obvious to see way, where second order is more of an indirect effect that occurs through complex dynamics.
Either way, I don't think those terms are well defined, and they're complex enough that using them to argue for morality, and calling people out for not knowing your definitions, makes you look caddish.
I doubt that they could mean causally indirect. I think it must mean something like: not used in accordance with the engineer's intent, especially when that intent covers a general affordance (e.g. affords communication over long distances) but is being judged for a more specific affordance (affords communication over long distances to order a terror attack). I dunno. I'd have to be a philosopher to work it out.
The problem of where to draw those lines is so difficult that legal systems resort to constructing tests based on whether a "reasonable person" could have foreseen the specific consequence of an act.
If that civil engineer sold or allowed (for any reason from negligence, insufficient security protocols, malice, ignorance etc) blueprints for his design to end up in enemy hands, he most certainly would be held liable.
The thrust of that point is the very idea that software engineers on projects involving user data, regardless of how and what they think they are building, are not simply building “bridges”. They are building systems that have the potential to be misused as mass-surveillance apparatuses. And they need to build their systems accordingly, with appropriate data-security and with acknowledgement that their work could leave them personally liable to action should their work be found to be used as a threat-vector towards their users.
The current attitude of “well people know what they’re signing up for and since they signed the User Agreement, we don’t have any liability here” towards users in the industry is sick.
Saying that engineers need to make responsible decisions in their software design is one thing. But you imply that misuse is a liability for the designer? So if someone takes an engineer's code and modifies it for malicious intent the engineer is liable? That's absurd.
That's like saying the people who designed the airplanes used on 9/11 are partially liable for 9/11.
The person who makes a car that someone uses to run over and kill a bystander: the engineer isn't responsible.
The person who messed up the Prius, with the issue where the gas pedal didn't work: the engineer IS responsible.
The first example is analogous to a software engineer writing image recognition software for a safety app, and then someone maliciously taking the software and using it to discriminate. The engineer is NOT responsible. If someone takes your software (it could even be open source) and applies or modifies it in a malicious way, they are responsible, not the engineer.
The second example is where an engineer makes a widget app that has a vulnerability but signs off on it, and then widget purchasers' credit card info is stolen. The engineer is responsible for negligence and should be liable for the misuse.
Although, predictably enough, I followed up with more thesis length posts to clarify my point! Namedropping einstein too, who do I think I’m impressing? (But Einstein is relevant to this debate, although he made the crucial mistake of only informing the government and not informing the public. Obviously this was well-intentioned on his part and not malicious at all but it’s a lesson that every developer of new innovations needs to take onboard.)
This isn’t about specifically demonizing “engineers” or innovation. Or even to do with liability where “that engineer developed a car that murdered someone”. Liability laws are robust enough to figure out whether the gas pedal was functioning correctly, the driver is at fault, or if not, the engineer (or rather, the company he works for) is at fault.
This debate is about broader strokes: should developers of new technologies be ethically bound to inform both politicians and the public about their innovations, specifically so that, after a public debate on the potential negative consequences of that technology, the public can then demand that their politicians produce robust laws protecting the public from misuse of those technologies? (I say yes. I know it’s tough because I love just developing cool new shit too, but we developers have got to take a step back and realize the future implications of what we make. Data gathering, manipulation and retention is a ticking time bomb. And who knows what’s next.)
Another example is facial recognition. It’s depressingly funny to see experts who know so much about the technology proclaim it to be no big deal without understanding the ramifications of living in a society where your every movement is tracked and potentially monitored. That is a literal police state, and the US constitution has amendments made hundreds of years ago specifically to outlaw such actions, both to protect the public from private companies tracking their movements and to protect the public from the government itself. To throw away all of that progress in the name of “facial recognition will be awesome because you can log in to your phone quicker” is not a good move, because once that technology is in the wild, until laws catch up to robustly protect the public, who knows what will be done to the public at large with it. Ditto for “who cares about data retention and website tracking, it makes marketing so much easier!” and “don’t worry about your DNA being stored and sold by ancestry sites indefinitely because it’s so cool to know that you’ve got a 7th cousin living on a different continent”.
Funny how you believe that. Remember the Miami bridge collapse?
Workers on the ground misplaced one of the supports, and then didn't stop work when loud noises started coming out of the concrete. This was not documented wrongly in the design; it was those same companies first mismeasuring the location of the supports, then simply moving them around when they didn't fit.
Who's culpable? The designers, of course.
Or take Dieselgate. European car manufacturers, German and French alike, systematically and purposefully installed software in their cars to commit fraud on the testers (and the government testers knew this was happening).
Who's culpable? A single engineer, of course.
Not the regulators that knew what was happening.
Not the European investment bank, who had seen reports on this.
Not Volkswagen management, that had built several facilities explicitly for committing this fraud.
Let's stop pretending the problem with engineering will be solved by assigning blame to engineers. It won't. It can't.
Large scale fraud, committed by management and government is the problem. It cannot be fixed with engineering.
"In my first ever AI class, we learned about how a system had been developed to automate something that had previously been a person’s job. Everyone said, “Isn’t this amazing?” — but I was concerned about who did these jobs. It stuck with me that no one else’s ears perked up at the significant downside to this very cool invention."
"Then, the gatekeepers are the press, and it’s up to them to ask what the negative impacts of the technology are."
Building an entire society of victims, and empowering a hysterical press that is in the business of feeding a hysterical public, sounds like a formula for paralysis.
Automotive engineers willfully destroyed opportunity for stablehands, whip manufacturers, and train conductors, among hundreds of other occupations annihilated.
Civil engineers with their oriented strand board trusses, reinforced concrete, and dimensional lumber stole the livelihoods of masons, carpenters, foresters and so many more.
Engineers have centuries of unethical practice to atone for. They need to stop their work because everything that they do is harmful and unethical!
I absolutely disagree. That is why I specifically mentioned culpability for first order effects. If you build a gun for someone you know to be violent you have culpability for its use. What I disagree with is the broadening of this "liability" to include second order effects. That we are now supposed to predict chaotic systems and know in advance all perverse outcomes - and that we should have moral liability should we fail in this task.
You've committed to the position that the engineer who built the bridge is morally culpable for that domestic violence.
A man-made dam might increase a city's available water supply and allow power generation, but it can also increase CO2 emissions and harm salmon reproduction. Increased CO2 emissions might contribute to global warming in the first case, and the detriment to salmon reproduction might mean an increased price of fresh fish and a declining seal population in the second. An increased price of fresh fish means less availability to poor consumers and the loss of what might be an otherwise valuable part of their diet. Global warming means climate change, sea level rise, an expected increase in unrest, and large scale ecological instability. All those things mean that my daughters may not enjoy the relatively secure life that my generation has led...
The modern trend both in law and society of statutory offenses and ex post facto "something bad happened so let's see what we can charge him with" is something I find deeply disturbing. I find the original article's proposals to be yet another step down that road.
Our forefathers accepted the universe was unpredictable. We seem to feel we have enough grasp now that if bad things happen someone must be to blame.
I think that handles can be provocative, and some are certainly chosen to do psychological harm (e.g. to racial minorities). If I had known you, and known that you would be triggered by this name, and that the harm was substantial, I would have some culpability, especially if I chose it with the intent of triggering you. But I didn't know you, and I didn't know that you would be triggered, and I did not intend to trigger you, and I had no way of knowing it would trigger anyone, nor a reasonable expectation that it would. And so there is really no culpability.
In the case of a scientist - the ultimate effects of their discovery will always be second order as they are not the engineers applying the science.
Here are two rebuttals to the above statement.
1. At the level of projects it's asking engineers to think a bit more broadly about solutions. For instance, when designing a user comment service for social media you can trade-off between designs that encourage informed debate vs. ranting flame wars. In many cases both designs have similar implementation costs and meet business requirements, so why not pick the one that is more socially responsible? As these choices become better understood they should just be part of user experience design.
2. To develop the trade-offs, we should direct more research to the social consequences of technology and take the results seriously. Great engineers are philosophers, or at least more broad-minded about the interactions between technology and society than some of the current leaders in Silicon Valley. Andy Grove, Bill Hewlett, David Packard, and Bill Gates were/are not just problem solvers but examples of engineers who have thought about how to benefit society as a whole.
Which is how we wind up with companies hell-bent on vacuuming up as much data as possible and selling it to anyone who asks. Perhaps if some software developers (or, at this point, some legislators) stopped to think about the ethical considerations of the tech industry’s actions, we all wouldn’t have to have conversations about how other countries can buy elections on the same platform where you share pictures of your food.
Tech was never the issue here - people were. The drive to scapegoat the upstarts should be regarded with great suspicion. While there may be questionable ethics involved with these companies, this looks like a classic attempt to scapegoat the troublesome for matters that are peripheral at best. Any action they could have taken would have dissatisfied someone; both the left and the right are convinced Facebook is biased against them because it doesn't reflect the world as they wish to see it.
For instance, machine learning is now widely used for surveillance and advertising in ways that violate basic presumptions about privacy rights. Thing is, I imagine that if, say, ML researchers had gone on strike over privacy issues somehow, society would have essentially told them to shut up and get back to work. Sure, the same larger society now wants to blame ML for the constriction of privacy and for extensive state and private surveillance networks, but it was that very society which cheered the construction of these exact surveillance networks "becuz terrorizm" and for the sake of "free speech" (for billionaire multinational corporations) and for other awful reasons.
The people who were skeptical or critical of this whole social trend knew damn well it was all a bad idea, and we said so. Society deserves to get what it voted for, with both its ballot box and its dollars, and on some level, it deserves to get it good and hard without scapegoating the hired help, when it was society in the first place that told us skeptical, critical engineers and scientists to shut up and produce what it wanted from us.
"For instance, machine learning is now widely used for surveillance and advertising in ways that violate basic presumptions about privacy rights."
Those same mathematical techniques are also being used to more accurately detect cancer in MRI scans. Saying everyone needs to be held responsible for second order effects is unrealistic and ultimately damaging to society. Blaming the toolmaker for things they cannot control will only result in no tools being made.
To me it points to a larger issue in our society that we are on a trend of completely removing context. Context no longer matters, what matters are the effects, regardless of whether you can reasonably be held accountable for them or not. It is thought-crime induced by those who in the same breath rail against that totalitarian behavior and then skip to implementing a version of it themselves.
A number of people on this thread immediately jump to the argument on surveillance and data privacy which is a useful discussion but tangential to what is being proposed in the original article. The article is moral hand-wringing with zero clarity or practicality. And the judge of who is culpable or not is left to a hysterical press, driven by a hysterical and sometimes uninformed public.
Look at what this behavior has done to political (and personal) discourse. Now imagine it being applied to science. No thank you.
As a matter of fact, my work covers fMRI scans of brains for a neuroscience lab, so I'm entirely with you here, but again, with a small caveat:
If society wanted the "toolmakers" (us) to take responsibility for which tools we make and how others use those tools, they would have to give us the actual power to decide what we work on and for whom. Instead the rule has usually been, "if you have ethical qualms about this, we'll find someone who doesn't". All exit and no voice means a society where nobody takes responsibility, because everyone self-selects into positions where they can agree with what they're doing; and if the broad masses and the hysterical press want to excoriate those doings while also materially demanding they continue, they get to do so.
It's like when people say that engineers should refuse to work on weapons that would violate the Geneva Conventions -- but the same people then vote for a government that moves funding from the NSF to the Defense Department!
It's about raising the standards because we can. Standard publication sections: intro, material and methods, results, discussion, conclusion. It feels like you can add a "speculative impact" section somewhere between discussion and conclusion.
Socrates, From Plato's dialogue Phaedrus 14, 274c-275b
All you have to do is read the history of engineers and scientists who worked on nukes or rocketry. The ones that tried failed miserably. Understanding why is crucial in understanding the limitations of domain experts.
Science is about understanding. Engineering is about using that understanding for building.
Of course they are, but this article is very specifically about the role of the scientist. Not the engineer.
Will negative disclosure cause problems for engineers? I don't know. What I do know, is it isn't fair to place that burden of knowledge entirely on scientists while tying their hands behind their back with respect to open discussion in a professional setting. Academic publishing is intended to be open without fear of professional and personal repercussion for increasing factual awareness. Get rid of that and, well, sit back, relax, and I hope we all can enjoy the apocalypse of knowledge, because that's where stuff is headed without the ability to professionally discuss existing problems.
> Nobody would be discussing the moral implications of scientific discoveries if it didn't ultimately lead to machines that cause effect in the real world.
That's just factually wrong. Mathematicians. Philosophers. Logicians. All the living and dead theorists who built the abstract foundations essential for the functioning of our machines. People who deal entirely in the world of the abstract. You don't develop a passion for abstraction because you want something out of reality. You develop a passion for abstraction because reality is sometimes terrible.
We don't have moral discussions about the discovery of a new law of physics or a mathematical lemma. This only becomes an issue when we can see how it can be applied to some aim. Thus if there were no engineers nobody would consider the scientists moral obligations with the exception of their experiments themselves causing harm.
The argument to limiting science is that engineers might take it and create some terrible machine with it. Perhaps the most notorious example being a nuclear bomb.
I believe we agree on the role of science, I don't think we agree on what their culpability should be.
Abstraction affects mental functionality. It forms expectations by providing a language in which the present can be predicted within some objective, measurable tolerance. The problem always has been, and always will be, that people can't be measured and tested like machines.
> This only becomes an issue when we can see how it can be applied to some aim.
So yes. That's correct, that's the terrible thing that can be observed and measured when it happens. Prevention is important.
> Thus if there were no engineers nobody would consider the scientists moral obligations with the exception of their experiments themselves causing harm.
Sorry, no, that again, ridiculous. You think that way. Not everyone thinks the same. Intent. Experimentation is intended to discover in a controlled environment while giving heavy weight to moral implications. People choose to opt in to scientific studies. People trust that process when they make that decision.
> The argument to limiting science is that engineers might take it and create some terrible machine with it. Perhaps the most notorious example being a nuclear bomb.
Yes, and what makes us think we are so evolved and so advanced as to believe we can't accidentally create something similar with present technology, lacking awareness of its creation? Evolution won't protect us against our own idiocy.
Reasoning about computation affects cognition, which has real world consequences that extend to everything else that is not reasoning about computation. That's just all there is to it.
> I believe we agree on the role of science, I don't think we agree on what their culpability should be.
They should be able to talk about personal, first hand experiences openly without being professionally attacked, first. Second, they should be able to talk about existing problems without fear of whose bottom line it might affect. There should not be blame in systems where everything is hard as heck to understand. Obligation to the truth - yes. Blame for the belief that there was the intent to create a problem by virtue of uncovering one? No. That has shortsightedness written all over it.
> The idea is not to try to predict the future, but, on the basis of the literature, to identify the expected side effects or unintended uses of this technology.
> No, we’re not saying they should reject a paper with extensive negative impacts — just that all negative impacts should be disclosed.
> But we’re moving towards a more iterative, dialogue-based process of review, and reviewers would need to cite rigorous reasons for their concerns, so I don’t think that should be much of a worry. If a few papers get rejected and resubmitted six months later and, as a result, our field has an arc of innovation towards positive impact, then I’m not too worried. Another critique was that it’s so hard to predict impacts that we shouldn’t even try. We all agree it’s hard and that we’re going to miss tonnes of them, but even if we catch just 1% or 5%, it’s worth it.
> We need to be saying, based on existing evidence, what is the confidence that a given innovation will have a side effect? And if it’s above a certain threshold, we need to talk about it.
> We believe that in most cases, no changes are necessary for any peer reviewers to adopt our recommendations — it is already in their existing mandate to ensure intellectual rigour in all parts of the paper. It’s just that this aspect of the mandate is dramatically underused.
> Then, the gatekeepers are the press, and it’s up to them to ask what the negative impacts of the technology are.
> Disclosing negative impacts is not just an end in itself, but a public statement of new problems that need to be solved. We need to bend the incentives in computer science towards making the net impact of innovations positive. When we retire will we tell our grandchildren, like those in the oil and gas industry: “We were just developing products and doing what we were told”? Or can we be the generation that finally took the reins on computing innovation and guided it towards positive impact?
I haven't read this article (https://acm-fca.org/2018/03/29/negativeimpacts/) in entirety but I agree with the bolded points. Only viewed bolded points up to the examples.
Edit: I want to say thank you for your respectful tone in spite of my slightly aggressive(?) one. I'm grateful to people like you because it reminds me, it matters a lot.
What will you do with this information?
So at best it's useless, and at worst it serves to fuel the imaginations of those who would use it for evil.
> our field has an arc of innovation towards positive impact
How would this change the course of innovation? The only way I can see is if it's used to censor.
Why is a specialist in a specific technology qualified to determine negative societal outcomes of it? Human societies are extremely complex systems and even those specialized in it have poor predictive track records.
Why would the press have this power? They've never had such substantial control before. Governments have been able to suppress independent nuclear research in the most extreme case but the laws surrounding that are not exactly proven. If we barely allow our government this power why should we give it wholesale to an unelected press?
> What will you do with this information?
If I were in the position of a researcher, I would talk to the people it affects directly, or try to communicate my awareness to people in a similar interdisciplinary dilemma who can then communicate their awareness to people who are 'on the ground', such that the problems both researchers have awareness of can be identified and then resolved in real life. The point is progress, one direction that moves forwards, not backwards or in cycles. Obviously this is a state of mind one can choose to see or ignore - see only the cycles, or notice the differences with positive improvement universally in mind. But I'm one person; stuff like this will always be a group effort.
> So at best its useless, and at worst it serves to fuel the imaginations of those who would use it for evil.
Group effort, squish evil. Evil is not something I know how to identify though, that's always been the dilemma. The real evil seems to be the thing that actually completely extinguishes your existence entirely, death(?).
> How would this change the course of innovation? The only way I can see is if its used to censor.
I don't think the intent is to censor, I think the intent is to alleviate existing censoring of problems people are aware of but have no way of speaking about.
> Why is a specialist in a specific technology qualified to determine negative societal outcomes of it? Human societies are extremely complex systems and even those specialized in it have poor predictive track records.
Maybe they have some first hand experience that they feel they would be embarrassed, ashamed, or rejected from disclosing that information because the association of being personally identified with a given experience is just something not accepted, presently, professionally. Relegated to therapy, even though, there might be actual problems people have some awareness of. Fear of being implicated as being the cause rather than the solution.
> Why would the press have this power? They've never had such substantial control before. Governments have been able to suppress independent nuclear research in the most extreme case but the laws surrounding that are not exactly proven. If we barely allow our government this power why should we give it wholesale to an unelected press?
It's not power. It's trust. It's saying, researcher talking to reporter, 'hey if we make mistakes, point them out, because we won't always be able to see every error we make, especially when we must work in a system that requires minimal error to be seen as a function of our own approval mechanic.'. Errors in knowledge systems that function off of principles that are resonant of their own structure - that leads to destabilization of structure. There has to be some tolerance for making error, that's give and take, you see me make errors, I don't see myself making them, point them out. That's the trust given to the press. Speak up.
> If we barely allow our government this power why should we give it wholesale to an unelected press?
Bodies that self regulate must regulate one another. Checks and balances.
If a person can predict the consequences of design decisions well enough to get paid for it, they'll probably have some idea of potential negative consequences as well.
> programmers should not try to determine how society will react to technology
That is exactly one of the most important parts of our job.
Do you think doctors should not think about the wider impact of treatment decisions? Legislators should not think about the second-order effects of the laws they pass?
Yes, prediction is difficult (especially about the future, as Yogi Berra said). We all know this. Most of the things that most programmers are doing have known and predictable effects, because most of what we are doing is not new. If we had no idea how society would react to technology, why would we even develop it? Of course thinking about the effect that your work has on the world is a basic responsibility of any human being, how it could possibly be otherwise?
I agree completely that it's hard or impossible to just predict in general how society will react to technology. But in many cases, there are people designing systems specifically for the purpose of getting people to behave a certain way. This implicitly acknowledges that they believe that they can affect behavior in a predictable way. If they can make predictions one way and be correct enough to profit from it, they should be capable of making predictions about things that could go wrong.
I agree completely that predicting something like what Facebook will look like 10 years in the future is impossible, and there's surely a lot of research where the possible consequences are infinite. A lot of results in CS research are like someone developing a new material - "Like, sure, this could be used to make a new, extra deadly tank or something, but it could be used for literally anything." In that sense, it's not useful to make predictions, but I think there's also a lot of research going on at Facebook and Google that looks closer to what I mentioned above, where they absolutely have an intention to change people's behavior and so probably also have the ability to predict the consequences of their complete success.
It is not useful to expect good predictions from the former group, and the later group isn't really interested in publishing in CS.
This is still really difficult, and quite ethically fraught.
When I was in high school and college (late 90s & early 2000s), I was one of the techno-utopians who believed that the Internet would usher in a new era of censorship-resistant free speech, free information, and free exchange of ideas with people across the globe, regardless of whatever walk of life you happened to come from. It would give a voice to marginalized people regardless of how niche their views are. By and large, it succeeded in this, and this site (HackerNews) is an example of that. I have no idea who you are or where you come from, and it could be literally half a world away from me, but I can still post this comment, and probably hundreds to thousands of people will read it.
Yet nowhere on our radar screen was the idea that maybe we wished some of those marginalized groups would stay marginalized. That in addition to giving a voice to dissidents and hobbyists and nerds and subcultures, we would also give a voice to racists and militants and pedophiles and drug traffickers. Why would we even consider them? The definition of "marginalized" involves being "out of sight of the mainstream". Yet now, one of the main complaints being levied against Silicon Valley companies is that they aren't doing enough to police hate speech or kick undesirable groups off their platforms.
And then this is hugely ethically fraught, because I'm sure that from the perspective of people in those groups, I'm the evil powermonger, and they are just exercising their human right to believe what they want. Silicon Valley has remained largely libertarian in outlook, but what if people get their wish and it starts cracking down on undesirables? Who's to say that your group would actually be one of the desirables? The new boss starts to look an awful lot like the old boss.
That this phenomenon came about through the heretofore unprecedented (sarcasm) exploitation of human credulity and vanity, that dopamine spikes are the very mechanism of addiction, and that we react more strongly to things that trip our amygdalae than our cortices, should surprise precisely no-one — least of all the people who were building tools predicated upon those things.
None of this stuff is new. None of this stuff is news. You don't get to play ignorant of the implications of the things you're building. Not while you're selling your time on the basis of how smart you are, anyway.
I don't agree. I think this causal relationship makes sense in hindsight, but I don't think most people capable of designing or implementing software can reliably extrapolate what its impact on society will be. That goes for both positive and negative impacts.
Researchers and engineers should still put the effort into questioning the nature of their work and its impact on society, because that's a good exercise in general. But I'm skeptical any given individual would be good at arriving to the right answer.
Facebook, Google, and some game companies definitely do hire lots of human-computer interaction Ph.D.s who definitely are trained in exactly this sort of thing.
There's a reason the title is "computer science researchers" and not "computer programmers".
This is an interesting point. A whole lot of companies do actually hire soft-sciences talent specifically for that purpose; it has been quite common in the gaming sector for years now (Blizzard, Valve, Bungie, just off the top of my head).
I'm reminded of the Tom Lehrer lyric:
Once the rockets are up
Who cares where they come down
That's not my department
Says Wernher von Braun
It's obvious that a missile can be used to send bombs across the world just as well as it can be used to send satellites into space.
Minecraft seems like a great form of entertainment that can be used as an educational tool. The fact that it might also be destructive to the development of children is non-obvious and takes some expertise outside the domain of software engineering to uncover.
You didn't design the mechanism, but you know what it does. Are your hands clean if you write that code?
Being able to hypothesize about potential risks and being able to perform experiments to verify the hypothesis are totally different skills.
You can guess that maybe your game will be unhealthy for teenagers, but constructing an experiment to verify this is a whole different ball game.
I'm reminded of back when I worked in agency land, and clients would ask us to build something to "go viral" back when going viral was a new concept. They assumed it was as easy as just saying "make it so".
Of course, it's never that easy to know that it will catch on.
How exactly do you make something that is fun, engaging, and rewarding enough to encourage regular use, but not fun, engaging or rewarding enough to abuse? Seriously, if you can answer that question, you're sitting on a gold mine of knowledge.
Firstly, this is for researchers/academics who are publishing in journals, disseminating ideas for new technology.
Secondly, the suggestion also clearly states that the work will be reviewed NOT by judging the utility of the possible consequences, but by the thoroughness of the analysis. In practice, that means that the paper's authors cannot be more sloppy than the reviewer (a peer in the field who can be assumed to have a roughly similar understanding of the technology). Reviewer feedback can point out any blind spots of the paper. That strikes me as a reasonable and concrete setup, unlike the FUD in these discussions.
Nobody is being asked to balance the positives against the negatives. They are simply requested to initiate a conversation about the possible negatives so that other researchers building on the idea can work towards improving those aspects!
(EDIT: Most papers/presentations already talk about the possible positive impacts -- nobody holds back on that for lack of expertise. It is only fair to ask authors to be less partisan and more thorough, as responsible researchers.)
Whether all engineers/programmers should take analogous responsibility in trying to anticipate the consequences of their work is a related, but different question, with different trade-offs involved -- because engineers are concerned more with building things and delivering practical benefits, while research (and academic scholarship) is typically more interested in a longer term view of things. Both are important but distinct questions. Please don't conflate the two points in this discussion.
> In my first ever AI class, we learned about how a system had been developed to automate something that had previously been a person’s job. Everyone said, “Isn’t this amazing?” — but I was concerned about who did these jobs. It stuck with me that no one else’s ears perked up at the significant downside to this very cool invention.
Statements like that are frightening to me. It’s incredibly hard to define “negative” in a reasonable way, and I guarantee you that someone else is going to define it in a way you don’t like.
What’s the negative impact of the recent discovery of a non-quantum recommendation algorithm on the field of quantum computing? Should the young researcher have spent time considering and detailing that?
If we consider the goal of research to improve society, then yes, I think it's reasonable to expect a researcher to spend some time considering whether what they're doing is going to actually improve society. In biology, most funding agencies require you to take a brief (~8-10 hours of class time) course on the responsible conduct of research, even if you aren't doing any human subjects research.
In the specific case you gave of the non-quantum recommendation algorithm, the negative impacts should be more or less the same as the impacts of the quantum algorithm, but with less of a barrier to entry. And even the person proposing this acknowledges: "We all agree it’s hard and that we’re going to miss tonnes of them, but even if we catch just 1% or 5%, it’s worth it."
But why must we burden the theorists with ethics, and not the programmers or companies who bring that theory into the real world?
My dad is out of a job now, too, but that’s because he got hurt at work, multiple times. He has had back issues my entire life because he got hurt at work.
The person proposing a consideration of negative consequences was “inspired” by jobs being automated away. Whether a researcher has considered the negative consequences enough will be determined by peer reviewers who have their own political leanings and potential agendas. In the end, I would expect the process to become completely subverted, without accomplishing much along the way.
This isn't just a matter of knowledge and familiarity. It requires some intensely personal moral and ethical judgments. One person's moral evil (self-driving trucks putting truckers out of work) might be another's moral obligation (getting rid of the life-shortening occupational hazards of trucking).
The verbiage around fully disclosing the possible outcomes adds an extra wrinkle. Reasonable results require social and technological forecasting and a weighing of what researchers think is likely or reasonable. It's possible that the researchers we're discussing might not have extensive training in forecasting large-scale social behavior.
I would submit that it's worth this effort to catch some small percent of potential negative impacts only if we can do so without excessive costs from false positives. A scenario in which an excess of false positives overwhelms the number of true positives at great cost is a very possible outcome and one that merits more than casual dismissal. If we can't predict outcomes reliably and can't agree on how to weigh them, then some might consider it wise to not predicate significant and consequential decisions based on a combination thereof.
You're absolutely right. Researchers should be taking a look at the long-term consequences of their work. It's just possible that researchers should not assume that they're equipped to make long-term predictions or shape the efforts of their peers around said predictions.
Hecht's heart - and yours - are absolutely in the right place. We can, and should, think carefully and deeply about the consequences of all of our choices with compassion and humility.
Absolutely. We shouldn't expect the authors of one paper to decide on the utility of various consequences and alternatives. Also, those consequences don't exist in a vacuum, and depend on other assumptions. Eg: Automating jobs is possibly an ethical positive provided people don't need jobs to survive. And this provides an opportunity to kick off other lines of research and inquiry.
> ...excessive costs from false positives...
Could someone give concrete examples of the possible costs of "false positives"? In the picture I have in mind, papers are not accepted/rejected based on whether someone likes the consequences or not. (as stated in the article)
> It's just possible that researchers should not assume that they're equipped to make long-term predictions or shape the efforts of their peers around said predictions.
I think excuses based on insufficient expertise are a cop-out. There is no social science specialization with predictive models for such things. The authors are expected to have the most insight into the specific aspects they are writing on, and to be generally educated well enough to be functioning citizens of modern society. If they can vote, they can damn well put some thought into possible consequences.
The way I see it, there are two goals underlying this suggestion:
1. Get a better understanding of the possible consequences of different lines of advancement, and, as a community, encourage the "positive" ones while discouraging the "negative" ones. (for some value of positive/negative, which might well be emergent only after much back-and-forth)
2. Get each and every researcher to actively think about the ethics of their research and the problems they choose to solve.
Most commenters are focusing on only the first one; I think the second is equally important. If the focus was mainly on the first problem, we might choose to have editors writing editorials every few months on recent advancements in some topic. Or a conference panel discussing the same. Those will help towards the first goal, but only partially towards the second goal, since most researchers could "outsource" ethics to others.
Or maybe automating certain jobs is an ethical positive because it means fewer people die in coal pits. You're very right that this is an excellent opportunity to kick off other lines of research and inquiry. It's possible that what is essentially a journal of applied mathematics might not be an ideal venue for such, given that journals of ethics and economics already exist.
> Could someone give concrete examples of the possible costs of "false positives"? In the picture I have in mind, papers are not accepted/rejected based on whether someone likes the consequences or not. (as stated in the article)
What would be the point of a negative social consequences disclosure requirement if there were no consequences whatsoever for leaving one off or being glaringly silly in the disclosure? The point of this proposed process is that it is grounds to reject a paper and require further revision.
To put it another way: the process proposed will either have false positives or it will do nothing. The extent to which it has false positives that exact a cost will be determined by the extent to which the process works as intended to discourage research deemed to have negative social consequences.
Rejecting a paper might end a grad student's career. It might prevent disclosure of a significant finding in time to have a major impact on something important. We have no way of anticipating the consequences of this. Perhaps this proposal should come with a discussion of the potential negative social consequences of it, given that some can already be reasonably foreseen.
Given how abstract that is, I shall offer a more concrete example. My past involves research into how to make circuits more efficient and more easily produced, reliably and at scale. At no point in this process did anyone stop and think about whether this might have applications in computers used as part of weapons systems. Which, looking back, is obviously one application of computers.
> The authors are expected to have the most insight into the specific aspects they are writing on, and generally educated well enough to be a functioning citizen of modern society.
You're absolutely right. The authors can be expected to have deep and significant insight into the technical or mathematical aspects of their work. It's possible that this might not be the same as having the most insight into economic or social consequences of their work.
> 1. Get a better understanding of the possible consequences of different lines of advancement, and, as a community, encourage the "positive" ones while discouraging the "negative" ones. (for some value of positive/negative, which might well be emergent only after much back-and-forth)
This is a laudable goal! And it's perhaps worth considering if there might be unintentional and perhaps "negative" side effects to such a process.
> 2. Get each and every researcher to actively think about the ethics of their research and the problems they choose to solve.
If this is the goal, then perhaps the requirement should be for every paper to be accompanied with an ethical statement which is in no way part of the review process. That would satisfy your needs without risking significant unintentional side-effects, would it not?
> If this is the goal, then perhaps the requirement should be for every paper to be accompanied with an ethical statement which is in no way part of the review process. That would satisfy your needs without risking significant unintentional side-effects, would it not?
How do we stop this ethics statement from being vacuous, unless it is reviewed? It might quickly devolve along the lines of TOS/EULA (legalese to include all edge cases) or reduce to something trivial like: No fingers were harmed while typing up this paper.
That takes a non-zero amount of time, and thus funding. So now we've got a system where disagreeing about someone's forecasting of potential social consequences or their decisions around ethics imposes costs on mathematics.
Even processes intentionally and explicitly designed to be iterative have an unfortunate tendency to stop iterating as conditions change. This adds one more reason for papers to not reach publication.
> If they disagree, and the author is keen on not mentioning those concerns even as worth considering, but the paper is otherwise good, we could evolve a system whereby the reviewer comment is published along with the paper.
That sounds like a good idea. In fact, why not just skip the review portion of it and publish reviewer's comments on a paper's ethics along side it every single time? It would seem to get to the same point more quickly!
> How do we stop this ethics statement from being vacuous, unless it is reviewed?
Here we have the crux of it: the whole point of this proposal is for there to be real consequences attached. Which brings us back to my point about false positives and side effects.
I suspect that in most cases, the statements would correctly be vacuous. Many - probably most - papers in computer science have minimal to no obvious or reasonably foreseeable social implications.
That's not really accurate. Sure, people talk about positive impacts - but in a technical CS paper, they talk about positive technical aspects, e.g. "this new method will allow you to train ML models faster". They don't usually talk about societal impacts, which is what the article is about.
The implicit assumption is that the engineers will use it to make something bad. The scientists must anticipate what engineers will do and document potential negative actions. At best it's useless boilerplate; at worst it's advertising all the negative uses. Science and engineering are inextricably linked when it comes to impacts on society.
The next step, of course, is to stop research if it's determined that someone can do something bad enough with it. Why else would you go through the effort of documenting it?
I would posit that there was no good way to reasonably forecast it at that time. This isn't an irrelevant point, though it will seem that way at first blush. The point is that it's difficult to forecast significantly in advance the consequences of technological advancement. That the people asked to do the forecasting are not trained in analyzing social trends does not make this task easier.
And there are books written in English that speak to the social history of the species. History, physics, etc., being subsets of English syntax (or other "human languages") tailored to describe specific semantics.
A casual look at history shows what happens when various social bubbles suddenly slam into each other: Columbus and the Americas, the repeated conquests across Europe and further east.
It doesn't seem a stretch to think "an information network could be used for malicious shit." And it has been for years, depending on who you talk to (content piracy, trafficking, AI-guided weapons, etc.), well before Facebook. So there's a plain history in contemporary times too. No need to dig that far back into history.
And it's not advocating for binning the abstract research, or for avoiding making these things concrete. It advocates for fostering a conversation about the potential pitfalls.
Like programmers do before considering a change.
Deep knowledge of the specifics isn't necessary, IMO. We code heads are already aware of "change in system may have bad side effects. Let's avoid it as much as possible." Emotional consideration is already there.
This thread is devolving into childish attitudes about having to be part of society. Division of labor taken to its extreme leads people to disregard that they are part of a bigger picture (society). What did Adam Smith have to say about such things, all the way back in the 1700s? And he had nowhere near modern formal training. Seems rather spot on here.
Is it possible that this might not have been the planned result of TCP/IP, however? At the time there was just one social bubble and the point was to enable two computers already in it to connect in a more standardized manner. The notion of using the system for malice largely didn't occur, because at that point there was no reasonable way to gain from doing so. People weren't trying to engineer globe-spanning society-merging information networks.
But never mind that. Reasonable people can disagree on questions like the accuracy of our ability to correctly predict the future from 45 years away and counteract those potential distant negative consequences.
Moving on to what I consider the more interesting point: the process proposed. It's pretty clear that this isn't about fostering a conversation. You do not need enforcement mechanisms to foster a conversation. Enforcement abilities - like rejecting papers, which always comes with the risk of killing a paper forever - are about coercion. Maybe there's another way, like including ethical statements outside review processes, with papers that could both foster conversation and not rely on coercion?
You're right again! Fostering conversation is immensely valuable. We are all part of a vulnerable and critically important society. It is our moral obligation to think about that as we look at our work and discuss it with our peers while also being humble and aware of our own limitations. We can, should, and must do this absolutely critical work. The future depends upon it. We are called to the work.
The people behind http://interactions.acm.org/ would say YES. I bet most of academia would say yes.
Then ask them if they understand just how much data Google & Facebook have on their browsing history via cookies, third-party trackers, adboxes & share buttons.
Then ask them if they know that, if they did an ancestry check, their DNA data may soon be shared with their insurance provider?
Then ask them if they know that their credit history was probably leaked to the public via the Equifax hack?
Then ask them if they know that all that data is legally allowed to be used forever with minimal oversight other than what’s mandated by law?
All of the above is obvious to nerds like us but the public at large haven’t got a clue about any of this.
This is a very long and especially old discussion, I'd say it approaches 150-200 years, and it first started out as a reaction to the optimistic views of positivists like Auguste Comte. Afaik nothing conclusive has come out of it yet, even though the discussion focused on some heated topics during all this time: should the physicists that invented the atom bomb be held responsible for Hiroshima? should the chemists be held responsible for the mustard gas used in WW1? should the bureaucrats with their focus on modernizing the State and having everything well-labeled and well-counted for be held responsible for the fact that that made the Nazis' job a lot easier when they needed to find the Jews among the general population? etc etc
It’s about having a public debate to decide what laws are needed to robustly protect the public before the technology is even misused.
So in your examples it’d be:
- when the physicists inform the public, MAD is quickly identified as the outcome of that technology, and laws are made to prevent nukes from being built in the first place (and, optimistically, to prevent wars from ever happening again, given the possibility that an aggrieved side might develop nukes). Or, because the public debate did not happen until after they were developed and deployed, nukes are allowed, wars between nuke-enabled countries are deferred to hyper-diplomacy, and every effort is made to restrict development & deployment, as we currently have.
- when the chemists alert the public that chemical weapons are possibly far more effective than anyone ever imagined, then laws are made to ban their deployment. As is currently the case after informed public debate. But instead of informed public debate before the fact, millions were gassed during WW1.
- when the bureaucrats realize they can identify large swathes of the population without restriction, laws can be brought in to restrict data gathering and protect the public from such actions.
In all of your examples, an informed public debate could have urged politicians to create laws to robustly protect the public from these technologies.
The key thing to realize is that the public debate will happen. It’s inevitable. The key role that the creators of these technologies can play is to raise these ethical concerns before harm is risked. Even though the public debate did not happen until after all of the above technologies were misused, it still resulted in the swift creation of laws to robustly protect the public from those technologies. Delaying the debate does nothing other than risk disaster.
I know that last line sounds like hyperbolic exaggeration but it’s an inevitability that, if ignored, is unethical.
Or maybe a town refuses to fluoridate their water after a series of propaganda campaigns convinces people there that it's toxic and will ruin the purity of their water system.
I understand why you cast it as you do. History is chock full of preventable abuses! Yet, it's worth considering that such things can backfire as well. Public debate is not a panacea, and policy is often rushed and poorly formed to satisfy atavistic fears. A public debate can easily urge politicians to robustly protect the public from imagined threats while enabling real ones.
But on the whole, the general public will drown out cranks who shout about imagined threats like fluoride or vaccines. The science on both is clear.
Likewise, I’m optimistic that the negative realities of net neutrality stifling startups and the mass data-gathering will become so apparent to future generations of lawmakers & voters that they’ll be reversed in due course.
The science may be clear, but it may be unwise to be too certain that future generations will agree with what you and I believe in today.
This pigeon-holing was way beyond tiresome, not to mention grossly inaccurate, 20 years ago: please STOP IT. Computer programmers aren't in general some special class of person incapable of learning new skills and behaviours. Quite the opposite for the most part.
Software engineering is a combination of applied mathematics, data science, and significant engineering practices (do X with Y constraints).
In Engineering proper, we have ethics classes. They cover how things can fail and cause property damage, structural failure, and/or injury up to and including death.
We have all read about the Therac-25. That was a software and hardware design choice that led to the deaths of several patients.
Computer science seems to skirt the ethics discussions just by being too new of a field. We (royal) aren't looking at the failure modes of having social media. Nor are we considering the ramifications of the code we write.
Intelligence is the ability to write the code. Wisdom is the ability to understand its ramifications.
When MySpace, Facebook, Twitter, et al. were created, the "consequences" were providing an outlet for expression or keeping connected with friends and family. Noble causes. How do we get from there to "...and guess what, it'll be bad for individuals or society in $THIS_SPECIFIC_WAY"? I don't think it's possible.
Ethics evolve with society. Society will develop the ethics necessary to teach computer scientists and programmers. Such things are already underway.
Further, I'm seeing too little nuance in the comments on this story. Someone puts a Raspberry Pi in a situation to be responsible for human lives and it fails - is Linus now culpable?
I won’t make a moral judgement on that decision but it is dangerously naive to think all technology has only positive externalities. The criticism of the current state of software engineering is that we don’t consider those externalities.
“It’s new and we didn’t know what would happen” is a really shitty excuse to hear from an aerospace engineer after an airline crash.
It's not clear to me that anyone can predict the consequences of technology even a few years into the future (much less the seventh generation that some folks talk about and that would, if adopted, bring progress to a halt), nor whether any elite should be trusted to decide whether to permit the adoption of some technology. (If you suggest the government, consider Trump and net neutrality.)
That’s the whole point of the article: no one person can ever figure that stuff out. That’s why the public needs to be informed. No matter how educated (malicious or not) an “elite” is, they’ll never be able to predict as much as the general public will.
Yet a wider public debate can.
Your example about greenhouse gases is a good example of this. Forget about the celebrities for a sec and look at the big picture:
The guys in the hydrocarbon industry were experts at their jobs: developing oil, coal & gas and selling those products. Likewise the car industry, experts at building and selling cars.
Their expertise is in building their particular products and selling them. They could not have been expected to understand the negative environmental effects. And moreover, it’s in their interest to ignore the negatives associated with their products because it’s much more profitable for them to produce cheap products that require little R&D.
It took insights from the general public outside those industries to realize how those products affect the population as a whole:
- Burning hydrocarbons releases far more CO2 than the planet can scrub via the carbon cycle, which has led to the greenhouse effect. Not only that, but certain types of coal produce the smog that once blighted our cities and literally killed people with weak respiratory systems. Not only all of that, but the lead that was added to gas to prevent engine knocking kills people too.
- Emissions from cars not only add CO2 to the greenhouse effect, but other car emissions such as carbon nanoparticles, nitrogen oxides, and sulfuric compounds also result in measurable deaths among the populations living around those cars.
As a result, laws are passed to make cars release fewer emissions and to reduce societal dependence on hydrocarbons. The public debate resulted in the population realizing that there are health hazards both to individuals in the short term and to the global climate in the long term, and resulted in laws being passed to reduce these health hazards as far as is possible right now.
Without informed public debate we’d never have realized that these negative effects occur, and we’d never have saved as many lives as have been saved now that our cities are not drenched in lead and nitrogen-oxide-saturated smog.
So, what might the negative consequences be for An Input Sensitive Online Algorithm for the Metric Bipartite Matching Problem?
Besides being (ironically) arrogant, your dismissal sounds to me like you want to disavow any responsibility computer programmers have for what the technology they develop is used for.
> founded on an incorrect understanding of what kind of thing programmers are good at
In my personal opinion, if you're not always thinking of the applications and effects on society of your work, you're not a very fully-developed human being and certainly not the best programmer you could be, to say nothing of being a good citizen.
Take an example: your team has developed technology that can identify, with five nines accuracy, the yearly income of a user. That’s innovative, for sure. And nobody is taking the achievement away from your team.
But you don’t need to be Socrates to realize that what you’ve created could be used in horrifying ways. The article is asking creators to take a step back, ignore the excitement of creation for one moment, and consider if what they’re doing should be done. And if the answer is Yes, then decide if a public debate is required to protect the public from that new technology’s misuse.
Now the first reaction from many will be to scoff “fuck that, I’m just building an app here, I’ve got an investment to recoup and money to make”. That’s valid, but it’s an unethical way of looking at the act of creation. Feel free to hide behind the “I’m just doing my job” excuse, but we all know where that ended up.
That is the crux here, politicians can only make laws to protect the public at large from misuse of your newly created technology if they know about it. And the public at large can only demand robust protection (in the form of laws made by their politicians) from misuse of your technology if the public at large knows about it. Without a public debate, new technology could be misunderstood by politicians such that even if they are aware of the technology and even if they legislate, they may not make laws that robustly protect the public.
Thus, from an ethical standpoint, a public debate is needed so that robust laws are demanded and provided in order to protect the public at large from new technologies.
I think if we start to delve into the actual implementation of this, and look at real examples, it will be clear that this idea of "research that can be used for bad should be not published" leads to a bad type of society.
The physicists & engineers didn’t identify the MAD inevitability from developing nukes. The public did.
I’m not demonizing the physicists or engineers here either; how were they to know what the outcome of their developments could be?! But the public at large contains people who have other ways of looking at new technologies than “this is innovative!” or “this will end the war!” or “this will be profitable!”. It’s these outsider viewpoints that are needed for the public to decide if laws are required in order to robustly protect the public from that new technology.
In specific implementation scenarios, these debates are pointless because we have the nukes & chemical weapons examples from recent history. The ethics are clear. I’m more speaking about developers of new technologies alerting the public at large about potential risks involved with their technology that would make robust laws protecting the public from misuse of their technology prudent.
But, in saying that, there are examples of programmers who could have alerted the public in the public interest but kept their mouths shut instead, such as the developers who wrote the emissions-cheat software for Volkswagen. Worse, they knew that their actions would lead to deaths among the populations in which those cars were sold, because emission-related deaths have been intimately understood for decades now. That is a crime perpetrated at the front lines of software development. Those guys knew what they were doing, and their “secrecy” is nothing more than a bloody conspiracy to murder. They didn’t give a fuck how many died from the emissions released by their cars during the decade they were on sale, because “I’m just doing my job building the software I was asked to build” is an adequate excuse in their book.
Under that framework USSR's suppression of genetics in favor of Lysenkoist bullshit would be justified because an incomplete understanding of genetics could lead to eugenics travesties. It is like King Cnut trying to stop the tides only totally unironically and unsarcastically.
> It implicitly states that computer programmers are, as a field, competent to predict how a technology will impact society.
What the article is proposing is that creators of new technologies inform the public so that the general public can decide if new laws are required to prevent that new technology from being misused against the public at large.
There’s precedents here and I’ve written a ton of comments about them if you want to read a bit about why such a proposition is the only ethical option.
This is the problem, because you know what else is controlled by the market? Your employment. I fully encourage going on strike if asked to make surveillance tech, but you will be fired for it. Then some other firm will be hired, and, oh well.
If "the market" controls democracy, to the extent that the "democratic" government demands surveillance tech because someone on the market wants to sell it, that's the problem to be solved, not a shortage of research scientists martyring their careers for no effective gain against the larger social pathologies.
EDIT: what I mean is, roughly, markets are how we usually screw ourselves up in the first place.
And it's doing a great job of it.
In fact, they’re all actions that show that an informed public debate regarding new technologies is needed to protect the public from not only unscrupulous businesses or external threats but to protect the public from their own authorities.
Without public debate and a public that demands robust protection in the form of laws, misused technologies represent a threat to everyone until that debate is had.
Someone else brought up the culpability of fission researchers for Hiroshima. That's one example of a major technical endeavor where even with historical hindsight people won't all agree on its ethics.
The article is proposing that that public debate happen during or before development so that laws can be made to robustly protect the public from a new technology in the event that it’s misused.
It is impossible to assess the risk of unknown unknowns, so rather than assailing someone with adjectives and persecuting them with the crime of “arrogance” and “hubris,” just judge less and listen. If they’re wrong, move on.
Look in the mirror before you call another person arrogant.
Why? Aren't we allowed to judge what's wrong? There's nothing wrong in calling someone arrogant if he's arrogant.
Better than invective is a genuine and sincere mutual examination of the facts — actual content rather than mindless reputational slander of an entire field of study.
Your comment would be much better without its first and last sentences. The inverse of a shit sandwich!
The mistake he made was that he only alerted the authorities. Which led to atom bombs being developed in secret without any feedback from the public as to whether this was a direction that they supported.
What we need in 2018 and beyond is for current and future developers of new technologies to alert both authorities and the public so that politicians can legislate for that technology’s use with public input as to what limits are deemed appropriate by the public at large.
The deep fakes example highlighted elsewhere in the thread is a perfect example of this. How can politicians legislate for this technology when they don’t even know it exists? How can the public indicate to those same politicians that they feel that deep fake technology used to create revenge porn is something that the public wishes to be made illegal?
Laws, and society as a whole, are a feedback mechanism that rely on information. Those with that information have a moral obligation to alert the public to consequences that will affect them all.
The risk there is that if you dithered the enemy could get it first.
If Pandora's box exists, there is only so much you can do to stop someone from opening it.
The key point is that without the public’s input, politicians acting rationally, decided to direct their engineers & scientists (also acting rationally) to develop these weapons in secret, with everyone involved knowing that nukes were “extremely powerful” and “terrifying” and “war-ending” but not fully appreciating the whole MAD aspect that happens when your enemies play development catch up.
Tons of technologies today, while not as much of an instant threat to our civilization as nukes, are loaded with ethical concerns that current politicians and the public at large are just not aware of. Just reference the deep-fakes saga for one recent example. There are currently no laws outlawing the use of deep-fake techniques to produce revenge porn, but there’s every reason to believe that these techniques will be outlawed for such use in forthcoming legislation now that the public and politicians are aware of the risk. Extrapolate that to any number of innovations. With informed debate we can keep laws ahead of the game and prevent new techniques and technologies from being legally used for unethical purposes.
But sarcasm aside, a variation of the exact same quip could be made for a privacy and other assessments. I would argue that it is on other disciplines to keep up with what's going on in comp.sci to determine the impact of new findings on their fields, and for all STEM fields to be very careful not to invent administrative policing roles that can grant unmerited standing to arbitrary political interests.
If you are in privacy, you need to be on this. If you are in economics, political science, or equity fields, you would also need to be on this. Technology makes everyone necessarily more interdisciplinary, and instead of filtering out, they should be researching new developments that affect them.
My fear is that if Computer Science doesn't start acknowledging the ethical consequences of the work being done, it will lead to a sharp increase in regulations. This fear centers primarily on self-driving cars, some of which seem to have been rushed into production and have led to serious consequences.
None of those ethical frameworks would lead me to conclude that, say, self-driving vehicles are net-negative for society.
My personal experience conducting research in computer science is that at least some of it is too abstract to have clear social consequences. What are the social consequences of making it marginally easier and faster to produce chip designs that are easy to manufacture?
They gathered a bunch of data and had a meeting in Asilomar (a pretty conference center in the middle of nowhere on the CA coast). In addition to the scientists who actually had the technical ability to splice genes (~100 humans at that point had the requisite skills), some lawyers and philosophers also attended. They argued a bunch. And at the end of the day, they agreed on a voluntary moratorium on splicing, even though a case for immediate, direct harm had not yet been made (precautionary principle).
Very reasonable predictions were made. Some of the more irresponsible claims were tamped down as being overly irrational. Some basic rules were applied on how to minimize collateral damage. Some strong rules were applied on disease-causing experiments.
It seems, looking back with the luxury of historical distance, that they chose wisely. It was probably because the level-headed people who focused on clear and present dangers, not absurd hypotheticals, managed to win the day.
Today, the logical conclusion of those decisions are just starting to be felt; we now have technology that permits us to make germ-line modifications (those are permanent, inherited changes), and we're starting to do actual trials with newly born children (this is, IMO, one of the most extraordinary moral and philosophical developments of my lifetime).
It's unclear to me whether CS has the same level-headed maturity that led to this. Nor do I think people in CS can really predict the indirect outcomes of their technology. Instead, I think CS people have an obligation to push the technology as far as it can go within a community-determined set of boundaries, and report the direct implications thereof to the general public, who will then (indirectly, through democratic mechanisms) create laws that restrict certain applications.
People working in CS are mostly in industry and do not have 10+ year timeframes to leisurely discuss ideas.
And what CS inventions are as impactful as atomic bombs (as someone mentioned above?) I cannot think of any CS inventions that warrant such extended ethical discussion.
The only exception would be computer scientists working on weapons in the defense industry. They should absolutely be held back by ethical concerns like this.
I have a fair amount of experience, being a biologist who did gene cloning who now works on large-scale machine learning models.
TBH I can see a wide range of surveillance-related technologies enabled by CS that are as impactful (but not as explosively so) as atomic weapons (I think you actually mean ICBMs, since atomic weapons aren't an existential threat, while ICBMs are).
If anything, this should hint at the difficulty of forecasting. The full impacts of some technologies cannot be understood without also knowing what they synergize with.
That said, ICBMs with (conventional) thermobaric warheads could do a lot of damage: you could effectively take out a medium-sized country's industrial sector in an hour.
To this end, we need to distinguish computer science (theory) from technology (application) – it is the latter that has direct effects on society.
You shouldn't expect a computer scientist to deeply understand sociological/psychological/philosophical questions, but they can and should, to some degree, identify where knowledge from other fields is important. My understanding is that they are not asking researchers to predict the future (although this is apparently what the interviewee implies), but simply to identify that there might be (societal) consequences, good or bad.
This is why in areas like social robotics and HRI you will find more and more articles that are written by researchers from different areas. E.g. a Computer Scientist and a Psychologist.
But are such questions important? Very.
Ethics is not CS' strong point. Adding a statement "this is why I think this research is ethical" doesn't seem like a bad place to start, if only because it would force researchers to actually consider what they are doing.
Seems like a double edged sword, potentially making it easier to negatively utilize the research.
I imagine if all 3D printers were distributed with warning stickers saying that they can be used to print guns and knives that more people would try it than if it was less publicized.
Making it just a bit harder to discover the knowledge to weaponize something could give defensive researchers the time to build effective countermeasures.
In your example, the result shouldn’t be warning stickers; it’s laws specifically against building guns with your 3D printer. Or, if the right to weapons happens to be enshrined in your constitution, it’s about legal liability protocols being amended to take into account the fact that a perpetrator used a weapon they built on their 3D printer as opposed to a regularly procured weapon.
Think about some of the most famous and influential papers and try to come up with negative consequences. When Codd wrote A Relational Model of Data for Large Shared Data Banks, should he have tried to think of ways big databases could be abused? When Ritchie and Thompson published the paper that introduced Unix, what could they predict? Or Knuth's many papers on compiler design?
Do other disciplines do anything similar? Computer science papers are often so close to mathematics, I suppose mathematics papers should include similar disclosures, right?
This feels a little like putting cancer warnings on everything in California. At least there I understand what they thought the outcome might be but maybe they didn't foresee how quickly people tune it out and you end up making it harder to communicate danger when it's actually there.
Also, this isn't the job of peer review. Peer review is hard enough, let us focus on reviewing the technical details, since that's what matters.
Laws, and society as a whole, are a feedback mechanism that rely on information. Those with that information have a moral obligation to alert the public to consequences that will affect them.
Also, I kind of think the secrecy of the Manhattan Project was essential, and after carefully reading multiple histories, I am convinced that Leslie Groves was a genius who did more than a good job managing the project. If you look at how he declassified the project, they did everything right. They had a historian with access to all the data. And scientists who understood the context. And they worked together to release as much information as they possibly could, and even helped make the case for transitioning control of weapons to civilians.
But the point isn’t anything to do with any of that. The point is that without the public’s input, these weapons were developed and used in the very first place, without an informed debate to decide if this is how we want to conduct ourselves.
This kind of debate was as common back then as it is today, and back then it led to chemical weapons being banned after WW1 (by the 1925 Geneva Protocol, with the support of pretty much every country).
And since then, for the few years that the US was the only nation with nukes, if not for the extreme restraint practiced by the higher-ups, nukes might well have been used more times (at the time, there was huge pressure to just nuke North Korea to end the Korean War).
Once it became clear that the “enemies” would catch up in the development race, meaning that any future war would be nuclear, the public debate converged very quickly on the current philosophy: MAD is the only outcome of a nuclear war.
If this debate had been allowed to be fully conducted in an open and informed manner before nukes were developed, we might live in a world where nukes were outlawed at UN level in the same way that chemical warfare was, long before they were even developed in the first place.
Extrapolate that thought to aggressive data-gathering & storage by social media sites, genomic information ancestry services, tracking technologies and techniques developed in the name of marketing, facial recognition technologies by security firms, Three-Letter agencies recording and monitoring every web user’s actions, profiling techniques to identify depressed users etc etc etc. Right now laws are not robustly protecting the public from misuse of these technologies. In fact, a lot of the misuse of the above technologies is directly due to the fact that the politicians know that they’ve got a powerful technology at hand and decide to develop it. And when informed public debate happens, when the negative outcomes of misuse of these techniques becomes so obvious to the public at large that they demand political action en masse, laws with more robust protections for the public’s data will be forthcoming in future updates. But those are the technologies that we know about. (At least, we nerds!). What is currently in development that has similar potential to be weaponized or misused that none of us know about yet?
I do so because I think any nation which doesn't do so will be replaced by one that does, and surviving is better.
By the way, I've already kind of gone through these sorts of thought experiments, and voted with my genome: I open-sourced the data in my genome voluntarily (https://my.pgp-hms.org/profile/hu80855C) because I don't really subscribe to the "grim meathook future" scenarios you're describing.
In general I think informed public debate is great, but in the specific case of the Manhattan Project, I really don't think having an informed public debate during the war was even a remote possibility.
But the authorities kept it secret, choosing to develop these weapons.
But, had those weapons not been developed in secret, and had an informed public debate occurred before development, that debate would inevitably have led to the MAD doctrine. That is the only inevitable outcome of nuclear war.
Without the guidance that MAD provides, politicians in 1939 did what they thought was the rational choice, and chose to develop these weapons so that they weren’t left unprepared if the enemy developed their nukes. But with the knowledge that MAD is the only outcome in a post-nuke world, totally different scenarios become possible. They might have chosen to take action at the UN level, with anti-proliferation treaties agreed even before the weapons were developed. Stifling the public debate just delayed the notion of nuclear non-proliferation treaties; it did not stop them. The only result of an informed public debate on nukes is nuclear non-proliferation treaties.
If all that had been done in 1939 as opposed to the 50’s & 60’s, the war could have been prevented before it even began!
Regardless of the crazy whataboutery, the lesson is that the public debate resulted in the ethics being decided as: “nuclear technology leads to MAD when used to create weapons, therefore laws are needed to protect the public from misuse of nuclear technology”. This debate is needed for every new technology.
Technology moves fast, faster than ever these days. And currently, we’re used to a situation where we typically operate these debates retroactively, legislating against misuse after the fact, when new technologies are created and then misused. Whereas, as this article is highlighting, the ethical way of doing things would be: to have informed public debates that allow laws to be created to robustly protect the public before a new technology is misused.
Note that nobody is accusing “new technologies” of being bad. Or evil. Or anything like that. The call is simply for creators to slow down and allow this public debate to take place openly, to decide whether the new technology can be misused, what the repercussions of that misuse would be, and whether new laws are needed to protect the public, before the possibility of their new technology being misused is even a factor.
Nobel predicted that dynamite would end all wars. Turns out, he was very wrong. On the other hand, plenty of technologies in recent decades have prompted predictions of impending dystopias, none of which materialized.
This all sounds like trying to blame scientists for not telling us the bad people will do bad things, instead of stopping bad people when they're doing bad things.
- police state actions and encouragement: entirely possible, especially now that we’ve all got location trackers that also contain our innermost thoughts in the form of our text messages, yet using all of that info against us is totally illegal.
- subliminal messaging & propaganda: never more possible than right now with our totally connected lifestyles, yet totally illegal.
And on and on. Just because the worst hasn’t happened doesn’t mean it wouldn’t happen if left unchecked.
Also, Nobel thought he’d created the most explosive substance that would ever be possible; with hindsight we know that dynamite wasn’t that, but nukes absolutely are. And his conclusion was correct in the context that nukes are the next step up from dynamite: with such powerful weapons, battlefield wars between those who hold nukes have effectively ended. (Proxy wars notwithstanding; if those countries had nukes they wouldn’t be used as pawns in a proxy war.)
I think the societal consequences are "what matters"
The technical details _literally_ don't matter without societal consequences
Since we can't predict the consequences of our actions (few scientists, if polled, would accurately predict the consequences of their papers), it would be entirely speculative.
Imagine a paper that showed an amazing improvement in machine learning: we could suddenly predict diseases just by looking at pictures of people's retinas, or predict any number of other fairly private things. Should a technical reviewer say "I think this could and will be used to discriminate against people, so it should not be published"? I'd far rather the reviewer just say whether there was an error, have the paper be published, and let the faculty of 1000 or the larger community debate whether the technology should legally be restricted.
Note that editors, not peer reviewers, have far more latitude to reject papers due to (their perceived, speculative) societal consequences.
Over time, people will probably turn this task into copying and pasting a standard list for their research area. Existing researchers still mostly won't have to think about societal impact, but maybe it'll have an effect on which research areas people choose to work in.
That council would then have the task of reviewing new technologies and their potential impacts. The council may then have different subgroups each discussing a certain technology, or possibly each with a certain specific task such as: reviewing research papers, watching the developments inside companies, writing legal propositions, etc...
This council could then be under a government agency, or better yet, an NGO that can get funding by the government, from donations, universities, etc...
What would you think about that?
A council, no matter how large, is, relative to the general public as a whole, made up of a small number of individuals.
Such a council would raise so many ethical concerns, such as: Who are these individuals? Why are they on the council? What are their biases? What does their livelihoods ultimately depend on? What conflicts of interest do they have? Among hundreds of other questions.
Only an informed debate among the general public can lead to laws that robustly protect the public at large because vested interests get drowned out by informed dissent from other, equally-qualified, voices who are not affected by such vested interests.
This article is about what does happen now, and is stigmatized against being disclosed due to fear of personal repercussions.
That was never the point of academia. These institutions exist, with all their rules and regulations, precisely so that to discover and uncover, one need not be afraid that one's findings will affect one's ability to keep a roof over one's head.
Presently, this article is being severely misinterpreted.
My career in technology has taught me it's a lot more obvious: when your marketing team starts buying massive collected datasets, or your sales team starts selling ad data to 3rd parties. We know these things happen, and the grey morality of it. Yet we turn our backs to it.
The reason I think HN is so quick to jump on these topics is because if we search ourselves we see that our morals rarely stand up to our self-proclaimed ideals. Jobs are sometimes hard to find, stability is nice, and dying on that hill doesn't always seem important.
Those are the negative societal consequences we are most dishonest about. Not Promethean delusions of grandeur.
And it's very difficult for even the experts to balance the positives against the negatives: facial recognition could be used to find a serial killer ... or a missing child.
As one example, RMS has long advocated against the "ethics" of certain technologies and companies, while Emacs and GCC have greatly accelerated the adoption of technology, generally. And those tools have almost certainly been used directly or indirectly to build some very, very bad things.
It's kind of like using an Ox in modern farming because you want more farmers to have work. You wouldn't do that because your farming business (barring some great marketing that lets you recoup the costs) would just go bankrupt quickly.
I see where you're coming from but that's quite an oversimplification, don't you think? Software that drives a computer's graphic card, for example, which in turn gets used to run software that enables a physician to explore a 3D MRI, are just two examples of tasks that do not consist of automating manual labour.
(You could probably pay a lot of painters to paint a lot of pictures but hooking them up to the MRI might be a little difficult).
There's a lot of software that achieves new things, rather than old and well-known things really fast and without involving hands or a human brain.
I prefer to avoid discussing ethics on forums like HN, so I won't comment on whether or not working on software that puts people out of work is "the right thing to do" -- suffice to say that, if you wish to avoid doing that, there's plenty of room to do it while still doing your job as a software engineer.
Precisely. And the only way those new things are thought up is if someone has the time to dedicate to the task. If that person instead has to spend 12 hours a day foraging for berries just to stay alive, then they won't have time to build 3D MRI code.
The gains to productivity brought about by mechanical -- and now digital -- automation have lifted billions out of poverty and extended their lifespans by decades. I agree with GP that the only purpose for software is to automate a human task. Lucky for us, software is doing things that humans can't do at all (or would take years of effort, in the case of your MRI picture).
By the way, you can always find a "cost reduction" in anything you do, it's a matter of point of view and horizon.
Now, one farmer can work a hundred acres before breakfast with crop yields higher than ever due to technology. Would you have us go back to an ox and a plow so as not to kill agricultural jobs? Same story for textiles. Should we reopen the vast mills that employed thousands? We don't have elevator operator jobs. The printing press put all the scribes out of business.
Contrary to your viewpoint, the economic thing to do would be to automate as much work as possible. When labor is more productive, you can earn the same money in less time. This is how we can afford to not work weekends, not because of labor laws. When part (or all) of a person's job is automated, it frees them up to do something else. More importantly, they can take some workload from someone else's plate, freeing the other to do higher-order work.
How many brilliant minds throughout history never made a significant contribution because they spent their lives toiling in some field or hunting/foraging for their own food? It is only when material needs are easily obtained that the time is freed for the intellectual to think. Historically, this was only the aristocracy, privileged from birth, that had free time because of their serfs and vassals doing the labor. Compare to today, where anyone with enough intelligence can make such contributions, precisely because they're not trapped in a brutal subsistence lifestyle. (Yes, there are tons of issues about education and opportunity here. But we don't have to gather our food to survive anymore.)
However, while automation helps in the aggregate, the world isn't lived or experienced in the aggregate but on the individual level. Put Jeff Bezos and 9 people living without shelter or money in a room, and in aggregate their average wealth is $15 billion per person; but that number tells you nothing about the individuals. When the factory closes due to technological advancement, sometimes those individuals can't get another job or one that pays nearly as much.
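The aggregate-versus-individual point can be made concrete with a few lines comparing mean and median; the $150B figure for the one wealthy person is purely illustrative:

```python
# One very wealthy person and nine people with nothing (illustrative numbers).
wealth = [150_000_000_000] + [0] * 9

mean = sum(wealth) / len(wealth)
median = sorted(wealth)[len(wealth) // 2]

print(f"mean:   ${mean:,.0f}")    # $15,000,000,000 "per person" in aggregate
print(f"median: ${median:,.0f}")  # $0 -- what the typical person actually has
```

The mean says everyone in the room is a billionaire; the median says the typical person has nothing. Both are "the aggregate", which is exactly why aggregate statistics can hide individual outcomes.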
So we need to both encourage technological advances that increase productivity and make sure they benefit individuals. I think the basic principle is that capital can move much more quickly than people: e.g., you can pull the money out of a factory in Kolkata or Indianapolis and invest it in one in Hanoi much more quickly than the workers can move to Hanoi (probably impossible) or find another job anywhere. The answer may be much better unemployment insurance, retraining, and laws slowing economic changes enough that individuals can keep up, to a degree based on the economic impact: e.g., closing a big factory in a small town is higher impact than closing a startup in London.
Of course, society has to be able to sustain and re-educate these people for new productive jobs. That's the part where most societies have failed, leading to civil unrest.