
Should computer science researchers disclose negative social consequences? - rbanffy
https://www.nature.com/articles/d41586-018-05791-w
======
rossdavidh
I've realized there's another thing about this proposal that bothers me: it's
basically waterfall software development. Agile is by no means a panacea, but
there's a reason it replaced the "specify in advance what everything will need
to look like and how it will work together" method. Instead, you need to just
start building pieces that move you towards your eventual goal, roughly
envisioned, and know that you will need to iterate on that frequently to
respond to what you learn along the way. The motivation of this proposal seems
to be that we can know in advance what the impact of a new piece of technology
is, and therefore we can just avoid ever developing the bad stuff, so we won't
have to deal with the hard work of changing it later.

But, we cannot, and it's dangerous to think we can. After we develop
technology, we will be surprised at how it is used and what the impact is, and
we will have to respond to that, sometimes with more technology and sometimes
with legislation and sometimes with changes in social mores. Thinking that
developers can just anticipate in advance what will happen is at best
delusional, like thinking that if we spend more time on the specifications
part of our next waterfall project, we can do better.

Moreover, if Tim Berners-Lee had specified in advance, "this world-wide web
thing will probably eviscerate all advertising-supported media, from
newspapers to magazines to radio and television", I don't know that we would
have made wiser choices as a result of that.

~~~
BoiledCabbage
What bothers me about this comment is the implicit assumption that waterfall
is universally bad and that anything less waterfall and more agile is
good/better.

This is incredibly false. Agile (or non-waterfall) style development is useful
in specific, limited cases: where you're creating something for which the cost
of discovering a failure is small. You're writing a mobile app, or a b2b
website, or some other common problem. It's cheap and easy, and the cost of a
failure is so small that it's cheaper to just fail and go from there than to
spend any time thinking about how something may fail.

If you're doing something mission critical - a space shuttle, something
involving expensive parts, something high risk - you make sure you do thorough
requirements analysis, well specified behavior, and well defined failure
modes. You break the system down into individual small components (the more
critical the failure, the smaller the components), create well defined
specifications for each component, test the hell out of each one
individually, ensure they work to spec, and then combine them. You don't Agile
your way through it.

With Agile, you'd just get something minimal working end to end and iterate.
If the thing you're working on can explode, you don't just iterate through a
minimal user story. You define each item well, ensure it meets its specs, and
understand its failure modes.

Same thing for the topic at hand on the impact of technology: understand the
failure modes of the components you're creating - and write them in your
paper.

~~~
j-pb
I present SpaceX as an amazing counterexample to "agile doesn't work for
things that explode".

I would much rather sit in a rocket that has blown up a hundred times on the
launch pad but launched successfully a thousand times more, with each time
being an opportunity to improve it, than one that has been engineered with
waterfall and never flown...

[https://spacenews.com/pentagon-advisory-panel-dod-could-take-a-page-from-spacex-on-software-development/](https://spacenews.com/pentagon-advisory-panel-dod-could-take-a-page-from-spacex-on-software-development/)

------
slededit
I find this new trend of trying to predict how technology will be used -
presuming its users will either be evil or too stupid to use it responsibly -
incredibly patronizing.

Engineers are not philosophers and should not be placed in this role. We do
not have the tools to do it. As with any human we have a duty to consider
first order effects - but even the most astute philosopher will have trouble
anticipating second order effects. This burden should not be placed on those
least qualified to perform it.

Edit: Thank you to everyone for your thoughtful replies. I find this topic
fascinating to discuss.

~~~
vanderZwan
> _I find this new trend of trying to predict how technology will be used -
> presuming its users will either be evil or too stupid to use it responsibly -
> incredibly patronizing._

People who create something, who research something, should by definition have
a better grasp on their topic than everyone else. To expect that they are more
knowledgeable, and therefore better equipped to see what it means, and
therefore also have a civic duty to explain this to the less well-informed, is
not patronizing. It's acknowledging their expertise.

> _Engineers are not philosophers and should not be placed in this role._

This is an interview in _Nature_. Academics with the ambition to publish in
academic journals, be they prestigious like Nature or Science, or smaller and
more niche like <insert your favourite journal here>, _are_ taking on a role
of a philosopher.

And sure, engineers may "just" apply technology by comparison, but honestly I
find that a very dismissive attitude as well. They are the "praxis" to
academia's "theory", meaning that what they do actually has consequences in
our lives. So if anything, their work comes with even more of a moral
obligation to "Do The Right Thing" than that of academics.

~~~
slededit
> People who create something, who research something, should by definition
> have a better grasp on their topic than everyone else. To expect that they
> are more knowledgeable, and therefore better equipped to see what it means,
> and therefore also have a civic duty to explain this to the less
> well-informed, is not patronizing. It's acknowledging their expertise.

They are experts in how the technology works. That doesn't make them experts
in how other humans will take advantage of what they have discovered/created.
I'm charitably assuming the scientist or engineer themselves had
non-evil aims.

> This is an interview in Nature. Academics with the ambition to publish in
> academic journals, be they prestigious like Nature or Science, or smaller
> and more niche like <insert your favourite journal here>, are taking on a
> role of a philosopher.

Yes, but a philosopher on which topic? Someone with a doctorate from an
engineering background is not as well educated, with regard to the societal
effects of technology, as someone with degrees in history or other studies of
human nature.

~~~
s-shellfish
Doctorate of philosophy.

The Ph in PhD actually means something. It's not some kitschy decoration.

~~~
dekhn
The Ph in PhD refers to natural philosophy, which diverged from moral
philosophy early on.

~~~
s-shellfish
How is natural philosophy different from moral philosophy?

I'm sure that comes across as a dumb question, but, modernity - technology.

The irony of mathematics being naturally built in, reinforced into all of us,
over and over. Computer science.

From wikipedia:

> From the ancient world, starting with Aristotle, to the 19th century, the
> term "natural philosophy" was the common term used to describe the practice
> of studying nature. It was in the 19th century that the concept of "science"
> received its modern shape with new titles emerging such as "biology" and
> "biologist", "physics" and "physicist" among other technical fields and
> titles; institutions and communities were founded, and unprecedented
> applications to and interactions with other aspects of society and culture
> occurred.[1] Isaac Newton's book Philosophiae Naturalis Principia
> Mathematica (1687), whose title translates to "Mathematical Principles of
> Natural Philosophy", reflects the then-current use of the words "natural
> philosophy", akin to "systematic study of nature". Even in the 19th century,
> a treatise by Lord Kelvin and Peter Guthrie Tait, which helped define much
> of modern physics, was titled Treatise on Natural Philosophy (1867).

> "systematic study of nature"

And what do we do? What is nature to study? What is life to study?

Our own puzzled face, staring back at us.

------
ksdale
I agree that it's unlikely that anyone can predict broad, long-term
consequences of any particular technology. On the other hand, the people who
design Facebook or mobile games to be maximally addicting definitely
understand something about how their software is affecting society. It's not a
huge leap to go from "I'm pretty sure that certain people will become obsessed
with this," to "It's probably harmful for lots of people to become obsessed
with this."

If a person can predict the consequences of design decisions well enough to
get paid for it, they'll probably have some idea of potential negative
consequences as well.

~~~
rossdavidh
There may be cases where negative consequences can be predicted, but this sure
isn't it. Of all the predictions I heard about Facebook when it was new, "fake
news" and "depression from comparing yourself to friends out having fun" and
"an increasingly clickbaity and outrage-stoking newsmedia" were not among
them. Not even close. Facebook is a great example of how programmers are not
good at predicting negative consequences. The predictions I heard were mostly
based on comparisons with Myspace and Livejournal. We are not good at this.
Plumbers shouldn't try to do your electrical wiring, doctors shouldn't try to
manage your database, legislators shouldn't try to pick your programming
language, and programmers should not try to determine how society will react
to technology.

~~~
ksdale
I'm not sure it's so much a matter of picking negative consequences out of the
blue sky of possible consequences, but rather wondering, "What happens if our
software works exactly how we intend and we gain the ability to really
manipulate people in a certain way?"

I agree completely that it's hard or impossible to just predict in general how
society will react to technology. But in many cases, there are people
designing systems specifically for the purpose of getting people to behave a
certain way. This implicitly acknowledges that they believe that they can
affect behavior in a predictable way. If they can make predictions one way and
be correct enough to profit from it, they should be capable of making
predictions about things that could go wrong.

I agree completely that predicting something like what Facebook will look like
10 years in the future is impossible, and there's surely a lot of research
where the possible consequences are infinite. A lot of results in CS research
are like someone developing a new material - "Like, sure, this could be used
to make a new, extra deadly tank or something, but it could be used for
literally anything." In that sense, it's not useful to make predictions, but I
think there's also a lot of research going on at Facebook and Google that
looks closer to what I mentioned above, where they absolutely have an
intention to change people's behavior and so probably also have the ability to
predict the consequences of their complete success.

~~~
sdrinf
These are two different sets of people: computer scientists develop algos / ml
/ etc., and want to publish them. UX designers, entrepreneurs, and other
customer-focused people in the industry observe human behavior, and (amongst
other activities) design traps around it.

It is not useful to expect good predictions from the former group, and the
latter group isn't really interested in publishing in CS.

~~~
0xdeadbeefbabe
The Challenger disaster is a good example of the engineer vs group dynamic
[https://www.npr.org/sections/thetwo-way/2016/03/21/470870426/challenger-engineer-who-warned-of-shuttle-disaster-dies](https://www.npr.org/sections/thetwo-way/2016/03/21/470870426/challenger-engineer-who-warned-of-shuttle-disaster-dies)

------
ssivark
I see multiple top level comments spouting righteous indignation about this
being patronizing to engineers/programmers, expecting too much of them, 1st
order -vs- 2nd order effects, etc.

Firstly, this is for _researchers/academics_ who are _publishing_ in
journals, disseminating ideas for new technology.

Secondly, the suggestion also clearly states that the work will be reviewed
NOT by judging the utility of the possible consequences, but by the
thoroughness of the analysis. In practice, that means the paper's authors
cannot be sloppier than the reviewer (a peer in the field who can be
assumed to have a roughly similar understanding of the technology). Reviewer
feedback can point out any blind spots in the paper. That strikes me as a
reasonable and concrete setup, unlike the FUD in these discussions.

 _Nobody is being asked to balance the positives against the negatives. They
are simply requested to initiate a conversation about the possible negatives
so that other researchers building on the idea can work towards improving
those aspects!_

(EDIT: Most papers/presentations already talk about the possible positive
impacts -- nobody holds back on that for lack of expertise. It is only fair to
ask authors to be less partisan and more thorough, as responsible
researchers.)

Whether all engineers/programmers should take analogous responsibility in
trying to anticipate the consequences of their work is a related, but
different question, with different trade-offs involved -- because engineers
are concerned more with building things and delivering practical benefits,
while research (and academic scholarship) is typically more interested in a
longer term view of things. Both are important but distinct questions. Please
don't conflate the two points in this discussion.

~~~
BrandonM
I’m not sure enough people actually read the article, though. Here’s what the
person proposing this change said:

 _> In my first ever AI class, we learned about how a system had been
developed to automate something that had previously been a person’s job.
Everyone said, “Isn’t this amazing?” — but I was concerned about who did these
jobs. It stuck with me that no one else’s ears perked up at the significant
downside to this very cool invention._

Statements like that are frightening to me. It’s incredibly hard to define
“negative” in a reasonable way, and I guarantee you that someone else is going
to define it in a way you don’t like.

What’s the negative impact of the recent discovery of a non-quantum
recommendation algorithm on the field of quantum computing? Should the young
researcher have spent time considering and detailing that?

~~~
rflrob
> What’s the negative impact of the recent discovery of a non-quantum
> recommendation algorithm on the field of quantum computing? Should the young
> researcher have spent time considering and detailing that?

If we consider the goal of research to improve society, then yes, I think it's
reasonable to expect a researcher to spend some time considering whether what
they're doing is going to actually improve society. In biology, most funding
agencies require you to take a brief (~8-10 hours of class time) course on the
responsible conduct of research, even if you aren't doing any human subjects
research.

In the specific case you gave of the non-quantum recommendation algorithm, the
negative impacts should be more or less the same as the impacts of the quantum
algorithm, but with less of a barrier to entry. And even the person proposing
this acknowledges: "We all agree it’s hard and that we’re going to miss tonnes
of them, but even if we catch just 1% or 5%, it’s worth it."

~~~
QML
I think in the case mentioned, the implication is that advertisers would be
better equipped to target us with ads – which is a fair point.

But why must we burden the theorists with ethics, and not the programmers or
companies who bring that theory into the real world?

------
rossdavidh
I get that they're trying to encourage ethical behavior, but this proposal
seems incredibly arrogant. It implicitly states that computer programmers are,
as a field, competent to predict how a technology will impact society. Whether
or not ANY field is competent to do that is questionable, but whether or not
computer programmers are able to do that is not questionable; it's clearly
incorrect. We don't have the background, training, skillset, or in some cases
even personality to be making insightful predictions about how society will
respond to and be impacted by a new technology. I'm not saying the person
advocating it is a bad person, but this particular proposal is arrogant (and
founded on an incorrect understanding of what kind of thing programmers are
good at).

~~~
tremon
They're not talking about computer programmers though (I agree they don't have
the ability). The more interesting question is whether Computer Science (the
academic discipline) should focus more on the societal impact of computing
instead of solely on the technical implementation of computing (i.e. applied
mathematics).

~~~
bartread
> They're not talking about computer programmers though (I agree they don't
> have the ability).

This pigeon-holing was way beyond tiresome, not to mention grossly inaccurate,
20 years ago: please STOP IT. Computer programmers aren't in general some
special class of person incapable of learning new skills and behaviours. Quite
the opposite for the most part.

~~~
tremon
I'm not sure I understand why you call this pigeon-holing. The GP explained
that the professional field of programming has very little overlap with the
professional field of social dynamics, and I was simply agreeing with that
observation. "Most programmers don't know how society will evolve" appears to
me a similar observation as "most pilots don't know how to build a plane". I
don't really see why that's an objectionable statement.

~~~
crankylinuxuser
This is, at its heart, a question of ethics in engineering disciplines.

Software engineering is a combination of applied mathematics, data science,
and significant engineering practices (do X with Y constraints).

In Engineering proper, we have ethics classes. They cover how things can
fail and cause property damage, structural failure, and/or injury up to and
including death.

We have all read about the Therac-25. That was a software and hardware design
choice that led to the deaths of several patients.

Computer science seems to skirt the ethics discussions just by being too new
of a field. We (royal) aren't looking at the failure modes of having social
media. Nor are we considering the ramifications of the code we write.

Intelligence is the ability to write the code. Wisdom is the ability to
understand its ramifications.

~~~
delinka
From whence does this wisdom come? Experience by oneself, or learning of the
experiences of others. How does one gain experience in, say, "having social
media" if we've never had social media until the last few years? Social media
hasn't been around long enough for a single complete human generation to have
experienced life without it.

When MySpace, Facebook, Twitter, et al. were created, the "consequences" were
providing an outlet for expression or keeping connected with friends and
family. Noble causes. How do we get from there to "...and guess what, it'll be
bad for individuals or society in $THIS_SPECIFIC_WAY"? I don't think it's
possible.

Ethics evolve with society. Society will develop the ethics necessary to teach
computer scientists and programmers. Such things are already underway[1].

Further, I'm seeing too little nuance in the comments on this story. Someone
puts a Raspberry Pi in a situation to be responsible for human lives and it
fails - is Linus now culpable?

1 - [https://www.computer.org/web/education/code-of-ethics](https://www.computer.org/web/education/code-of-ethics)

~~~
mulmen
It doesn’t take much study of history to see how cults form or the power of
publishing anything you want as fact. Social media put the tools to do that in
the hands of everyone.

I won’t make a moral judgement on that decision but it is dangerously naive to
think all technology has only positive externalities. The criticism of the
current state of software engineering is that we don’t consider those
externalities.

“It’s new and we didn’t know what would happen” is a really shitty excuse to
hear from an aerospace engineer after an airline crash.

~~~
jejones3141
A crash is the sort of outcome an aerospace engineering company should be able
to predict. Celebrities burning huge quantities of aviation fuel on their way
to and from conferences to pontificate about greenhouse gases isn't.

It's not clear to me that anyone can predict the consequences of technology
even a few years into the future (much less the seventh generation that some
folks talk about and that would, if adopted, bring progress to a halt), nor
whether any elite should be trusted to decide whether to permit the adoption
of some technology. (If you suggest the government, consider Trump and net
neutrality.)

~~~
onceKnowable
>It’s not clear to me that anyone can predict the consequences of technology

That’s the whole point of the article: no one person can ever figure that
stuff out. That’s why the public needs to be informed. No matter how educated
(malicious or not) an “elite” is, they’ll never be able to predict everything
on their own.

Yet a wider public debate can.

Your example about greenhouse gases is a good example of this. Forget about
the celebrities for a sec and look at the big picture:

The guys in the hydrocarbon industry were experts at their jobs: developing
oil, coal & gas and selling those products. Likewise the car industry, experts
at building and selling cars.

Their expertise is in building their particular products and selling them.
They could not have been expected to understand the negative environmental
effects. And moreover, it’s in their interest to ignore the negatives
associated with their products because it’s much more profitable for them to
produce cheap products that require little R&D.

It took insights from the general public outside those industries to realize
how those products affect the population as a whole:

- Burning hydrocarbons releases far more CO2 than the planet can scrub via
the carbon cycle, which has led to the greenhouse effect. Not only that, but
certain types of coal produce the smog that once blighted our cities and
literally killed people with weak respiratory systems. Not only all of that,
but the lead that was added to gas to prevent engine knocking kills people
too.

- Emissions from cars not only add CO2 to the greenhouse effect, but other car
emissions such as carbon nanoparticles, nitric oxides, and sulfuric compounds
also result in measurable deaths among the populations living among those cars.

As a result, laws were passed to make cars emit less and to
reduce societal dependence on hydrocarbons. The public debate resulted in the
population realizing that there are health hazards to both individuals in the
short term and the global climate in the long term, and resulted in laws being
passed to reduce these health hazards as much as is possible right now.

Without informed public debate we’d never have realized that these negative
effects occur, and we’d never have saved as many lives as have been saved now
that our cities are not drenched in lead and nitric-oxide-saturated smog.

------
yontherubicon
I don't think this matters. To my knowledge, much of what would cause the
negative social consequences isn't necessarily coming out of the academy, or
up for peer review. Instead, it's coming from companies that handle massive
amounts of data and aren't submitting their technology (in general) as a paper
to a peer-review board, but as a product to market. While researchers for
these companies can and do publish, the end goal is profit, not publication.

~~~
denzil_correa
The more I think about it, the more I feel "Information Asymmetry" is the root
cause of such problems.

[https://en.wikipedia.org/wiki/Information_asymmetry](https://en.wikipedia.org/wiki/Information_asymmetry)

------
onceKnowable
There’s precedent here. Einstein notified the US President when it became
clear as to the danger associated with, what were then, recent developments in
high-energy particle physics. That allowed politicians, who would have had
very little knowledge of physics, to do their job effectively according to the
“new landscape” that such developments presented.

The mistake he made was that he _only_ alerted authorities, which led to atom
bombs being developed in secret without any feedback from the public as to
whether this was a direction that they supported.

What we need in 2018 and beyond is for current and future developers of new
technologies to alert both authorities and the public so that politicians can
legislate for that technology’s use with public input as to what limits are
deemed appropriate by the public at large.

The deep fakes example highlighted elsewhere in the thread is a perfect
example of this. How can politicians legislate for this technology when they
don’t even know it exists? How can the public indicate to those same
politicians that they feel that deep fake technology used to create revenge
porn is something that the public wishes to be made illegal?

Laws, and society as a whole, are a feedback mechanism that rely on
information. Those with that information have a moral obligation to alert the
public to consequences that will affect them all.

~~~
carry_bit
> The mistake he made was that he only alerted authorities, which led to atom
> bombs being developed in secret without any feedback from the public as to
> whether this was a direction that they supported.

The risk there is that if you dithered the enemy could get it first.

If Pandora's box exists, there is only so much you can do to stop someone from
opening it.

~~~
onceKnowable
His disclosures were before the war. If the public had had a chance to chip in,
along with a public debate that would inevitably have led to our current MAD
theory, then it’s reasonable to imagine that the current “let’s not go to war
with each other anymore because MAD” might, even before WW2, have been “let’s
not go to war with each other anymore, or develop these expensive yet
terrifying weapons, because MAD”.

The key point is that without the public’s input, politicians, acting
rationally, decided to direct their engineers & scientists (also acting
rationally) to develop these weapons in secret, with everyone involved knowing
that nukes were “extremely powerful” and “terrifying” and “war-ending” but not
fully appreciating the whole MAD aspect that happens when your enemies play
development catch-up.

Tons of technologies today, while not as much of an instant threat to our
civilization as nukes, are loaded with ethical concerns that current
politicians and the public at large are just not aware of. Just reference the
deep fakes saga for one recent example. There are currently no laws outlawing
the use of deep fake techniques to produce revenge porn, but there’s every
reason to believe that these techniques will be outlawed for such use in
forthcoming legislation now that the public and politicians are aware of the
risk. Extrapolate that to any number of innovations. With informed debate we
can keep laws ahead of the game and prevent new techniques and technologies
from being legally used for unethical purposes.

------
motohagiography
Surely submitting technical papers for critical analysis before publication,
to ensure that problematic views are not given a platform or legitimized by
institutions, and so that affected and marginalized communities are given more
prominent and equal representation, will help correct systemic injustices.

But sarcasm aside, a variation of the exact same quip could be made for
privacy and other assessments. I would argue that it is on other disciplines
to keep up with what's going on in comp.sci to determine the impact of new
findings on their fields, and for all STEM fields to be very careful not to
invent administrative policing roles that can grant unmerited standing to
arbitrary political interests.

If you are in privacy, you need to be on this. If you are in economics,
political science, or equity fields, you would also need to be on this.
Technology makes everyone necessarily more interdisciplinary, and instead of
filtering new developments out, those fields should be researching the
developments that affect them.

------
wwhitlow
I like the intent behind this idea and think that it is important for ethics
to start becoming part of Computer Science. I'd be curious if someone with
more knowledge about medicine could explain how those publications handle
these dilemmas, especially with regard to CRISPR-Cas9, as that is probably the
most famous recent discovery that needs some serious ethical consideration.

My fear is that if Computer Science doesn't start acknowledging the ethical
consequences of the work being done, it will lead to a sharp increase in
regulations. I hold this fear primarily about self-driving cars, some of which
seem to have been rushed into production and have led to serious
consequences.

~~~
dekhn
CRISPR doesn't really mean the ethical considerations have changed. It's just
a tool that makes genomic changes easier (and the jury is still very much out
on whether it can do so safely). Other tools already existed to do this, just
more expensive and more challenging to work with.

------
dekhn
CS should take a few hints from more mature fields (I don't mean that
pejoratively - it's just that it's a very recent field). In particular, let's
look at what biology did once it was technically possible to splice genes. A
lot of biologists were terrified by what they believed the consequences of
gene splicing were (anything from genetic social dystopias to superdiseases).

They gathered a bunch of data and had a meeting in Asilomar (a pretty
conference center in the middle of nowhere on the CA coast). In addition to
the scientists who actually had the technical ability to splice genes (~100
humans at that point had the requisite skills), some lawyers and philosophers
also attended. They argued a bunch. And at the end of the day, they agreed on
a voluntary moratorium on splicing, even though the argument for immediate,
direct harm had not yet been made (precautionary principle).

Very reasonable predictions were made. Some of the more irresponsible claims
were tamped down as being overly irrational. Some basic rules were applied on
how to minimize collateral damage. Some strong rules were applied on disease-
causing experiments.

It seems, looking back with the luxury of historical distance, that they chose
wisely. It was probably because the level-headed people who focused on clear
and present dangers, not absurd hypotheticals, managed to win the day.

Today, the logical conclusion of those decisions is just starting to be felt;
we now have technology that permits us to make germ-line modifications (those
are permanent, inherited changes), and we're starting to do actual trials with
newly born children (this is, IMO, one of the most extraordinary moral and
philosophical developments of my lifetime).

It's unclear to me whether CS has the same level-headed maturity that led to
this. Nor do I think people in CS can really predict the indirect outcomes of
their technology. Instead, I think CS people have an obligation to push the
technology as far as it can go within a community-determined set of
boundaries, and report the direct implications thereof to the general public,
who will then (indirectly through democratic mechanisms) create laws that
restrict certain applications.

~~~
sol_remmy
Most biologists are either in academia or government-employed and do not need
to even think about money/making a profit.

People working in CS are mostly in industry and do not have 10+ year
timeframes to leisurely discuss ideas.

And what CS inventions are as impactful as atomic bombs (as someone mentioned
above)? I cannot think of any CS inventions that warrant such extended ethical
discussion.

The only exception would be computer scientists working on weapons in the
defense industry. They should absolutely be held back by ethical concerns like
this.

~~~
dekhn
actually, the Asilomar conference touched quite heavily on the rapidly
burgeoning biotech industry.

I have a fair amount of experience here, being a biologist who did gene
cloning and who now works on large-scale machine learning models.

TBH I can see a wide range of surveillance-related technologies enabled by CS
that are as impactful (but not as explosively so) as atomic weapons (I think
you actually mean ICBMs, since atomic weapons aren't an existential threat,
while ICBMs are).

~~~
Kalium
I suspect you may mean that ICBMs with atomic weapons are existential threats.
Most people would not regard an ICBM equipped with a conventional explosive
warhead as being capable of annihilating humanity.

If anything, this should hint at the difficulty of forecasting. The full
impacts of some technologies cannot be understood without also knowing what
they synergize with.

~~~
dekhn
Yes, I mean ICBMs with atomic warheads.

That said, ICBMs with (conventional) thermobaric warheads could do a lot of
damage - you could effectively take out a medium-sized country's industrial
sector in an hour.

------
nbeleski
Computer Science is a much bigger field than it once was. There are many
aspects where compsci is useful from an interdisciplinary point of view, and
this, imho, entails working with researchers from different specific areas.

You shouldn't expect a computer scientist to deeply understand
sociological/psychological/philosophical questions, but they can and should,
to some degree, identify where knowledge from other fields is important. My
understanding is that they are not asking researchers to predict the future
(although this is apparently what the interviewee implies), but simply to
identify that there might be (societal) consequences, good or bad.

This is why in areas like social robotics and HRI you will find more and more
articles that are written by researchers from different areas, e.g. a computer
scientist and a psychologist.

~~~
s-shellfish
Expect of others? No.

But are such questions important? Very.

------
probably_wrong
I considered a similar idea during my last conference, when a casual stroll
through the poster session left me thinking "I could do a lot of damage with
the papers in this room alone". A simple example: one poster about how to
change people's behavior with peer pressure over social networks, and another
one about how to detect depressed users. You could do a lot of damage
combining those two.

Ethics is not CS' strong point. Adding a statement "this is why I think this
research is ethical" doesn't seem like a bad place to start, if only because
it would force researchers to actually consider what they are doing.

~~~
slx26
well, when you use technology to uncover problems, you open the door to both
fixing them and exploiting them. knowledge is power, and power can be used for
whatever you want. and yet no one is thinking about pulling kids out of
schools.

------
Firerouge
I get the intent, but at the same time, doesn't disclosing all potential
negative consequences in the paper also provide a sort of guideline on how to
weaponize the research?

Seems like a double edged sword, potentially making it easier to negatively
utilize the research.

~~~
LeanderK
Sounds a lot like the "security through obscurity" argument. I was never a fan
of it. If we don't research and communicate the dangers, potential issues
might slip through! We need to be proactive about these things.

~~~
Firerouge
That's a good point. Security through obscurity doesn't prevent attacks, but
may raise the bar of entry and delay a motivated attacker.

I imagine that if all 3D printers were distributed with warning stickers
saying they can be used to print guns and knives, more people would try it
than if it were less publicized.

Making it just a bit harder to discover the knowledge to weaponize something
could give defensive researchers the time to build effective countermeasures.

~~~
onceKnowable
That’s why informed public debate is not about adding warning stickers to
things, it’s about creating competent laws that robustly protect the public.

In your example, the result shouldn’t be warning stickers, it should be laws
specifically against building guns with your 3D printer. Or, if the right to
weapons happens to be enshrined in your constitution, it’s about legal
liability protocols being amended to take into account the fact that a
perpetrator used a weapon built with their 3D printer as opposed to a
regularly procured weapon.

------
dekhn
Since we can't predict the societal consequences, no, no we shouldn't
'disclose' them.

Also, this isn't the job of peer review. Peer review is hard enough, let us
focus on reviewing the technical details, since that's what matters.

~~~
onceKnowable
There’s precedent here. Einstein notified the US President when it became
clear as to the danger associated with, what were then, recent developments in
high-energy particle physics. That allowed politicians, who would have had
very little knowledge of physics, to do their job effectively according to the
“new landscape” that such developments presented.

The mistake he made was that he _only_ alerted authorities, which led to atom
bombs being developed in secret without any feedback from the public as to
whether this was a direction that they supported.

What we need in 2018 and beyond is for current and future developers of new
technologies to alert both authorities and the public so that politicians can
legislate for that technology’s use with public input as to what limits are
deemed appropriate by the public at large.

The deep fakes example highlighted elsewhere in the thread is a perfect
example of this. How can politicians legislate for this technology when they
don’t even know it exists? How can the public indicate to those same
politicians that they feel that deep fake technology used to create revenge
porn is something that the public wishes to be made illegal?

Laws, and society as a whole, are a feedback mechanism that rely on
information. Those with that information have a moral obligation to alert the
public to consequences that will affect them.

~~~
dekhn
Interesting point re: Einstein. However, I think what he did was literally to
ensure that the US won the war, not disclose a societal consequence. Since
that was an existential threat, it falls under different rules. Object
detection is not an immediate existential threat.

Also, I kind of think the secrecy of the Manhattan Project was essential, and
after carefully reading multiple histories, I am convinced that Leslie Groves
was a genius who did more than a good job managing the project. If you look at
how he declassified the project, they did everything right. They had a
historian with access to all the data. And scientists who understood the
context. And they worked together to release as much information as they
possibly could, and even helped make the case for transitioning control of
weapons to civilians.

~~~
onceKnowable
Of course they were all geniuses. And the project as a whole definitely was
done in as ethical a way as possible. They weren’t bad people and I’m not
implying that they were.

But the point isn’t anything to do with any of that. The point is that without
the public’s input, these weapons were developed and used in the very first
place, without an informed debate to decide if this is how we want to conduct
ourselves.

This kind of debate was as common back then as it is today, and back then it
led to chemical weapons being banned after WW1 with the support of
pretty much every country.

And since then, for the few years that the US was the only one with nukes, if
not for the extreme restraint practiced by the higher-ups, they might have
happily used nukes more times (at the time, there was huge pressure to just
nuke North Korea to end the Korean War).

Once it became clear that the “enemies” would catch up in the development
race, meaning that any future war would be nuclear, the public debate
converged very quickly on the current philosophy: MAD is the only outcome in a
nuclear war.

If this debate had been allowed to be fully conducted in an open and informed
manner before nukes were developed, we might live in a world where nukes were
outlawed at UN level in the same way that chemical warfare was, long before
they were even developed in the first place.

Extrapolate that thought to aggressive data-gathering & storage by social
media sites, genomic information ancestry services, tracking technologies and
techniques developed in the name of marketing, facial recognition technologies
by security firms, Three-Letter agencies recording and monitoring every web
user’s actions, profiling techniques to identify depressed users etc etc etc.
Right now laws are not robustly protecting the public from misuse of these
technologies. In fact, a lot of the misuse of the above technologies is
directly due to the fact that the politicians know that they’ve got a powerful
technology at hand and decide to develop it. And when informed public debate
happens, when the negative outcomes of misuse of these techniques become so
obvious to the public at large that they demand political action en masse,
laws with more robust protections for the public’s data will be forthcoming in
future updates. But those are the technologies that we know about. (At least,
we nerds!). What is currently in development that has similar potential to be
weaponized or misused that none of us know about yet?

~~~
dekhn
I gotta say, ultimately, I support nation-states who develop weapons to
address existential threats in secret.

I do so because I think any nation which doesn't do so will be replaced by one
that does, and surviving is better.

By the way, I've already kind of gone through these sorts of thought
experiments, and voted with my genome: I open-sourced the data in my genome
voluntarily ([https://my.pgp-hms.org/profile/hu80855C](https://my.pgp-hms.org/profile/hu80855C))
because I don't really subscribe to the "grim meathook future" scenarios
you're describing.

In general I think informed public debate is great, but in the specific case
of the Manhattan Project, I really don't think having an informed public
debate during the war was even a remote possibility.

~~~
onceKnowable
Einstein alerted the authorities about the dangers on Aug 2nd, a month before
the war had even started and long before the US had joined the fight.

But the authorities kept it secret, choosing to develop these weapons.

But, had those weapons not been developed in secret and had an informed public
debate occurred before development, that debate would have inevitably led to
the MAD doctrine. That is the only inevitable outcome of nuclear war.

Without the guidance that MAD provides, politicians in 1939 did what they
thought was the rational choice, and chose to develop these weapons so that
they weren’t left unprepared if the enemy developed their own nukes. But with
the knowledge that MAD is the only outcome in a post-nuke world, totally
different scenarios become possible. They might have chosen to take action at
the UN level, with anti-nuclear-proliferation treaties even before the weapons
were developed. Stifling the public debate just delayed the notion of
anti-nuclear-proliferation treaties, but it did not stop them. The only result
of an informed public debate on nukes is anti-nuclear-proliferation treaties.

If all that had been done in 1939 as opposed to the ’50s & ’60s, the war could
have been prevented before it even began!

Regardless of the crazy whataboutery, the lesson is that the public debate
resulted in the ethics being decided as: “nuclear technology leads to MAD when
used to create weapons, therefore laws are needed to protect the public from
misuse of nuclear technology”. This debate is needed for every new technology.

Technology moves fast, faster than ever these days. And currently, we’re used
to a situation where we typically conduct these debates retroactively,
legislating against misuse after the fact, when new technologies are created
and then misused. Whereas, as this article is highlighting, the ethical way of
doing things would be to have informed public debates that allow laws to be
created to robustly protect the public before a new technology is misused.

Note that nobody is accusing “new technologies” of being bad. Or evil. Or
anything like that. The call is simply for creators to allow this public
debate to take place openly, to decide whether the new technology can be
misused, what the repercussions of that misuse would be, and whether new laws
are needed to protect the public, before the possibility of their new
technology being misused is even a factor.

------
sandworm101
Compsci researchers aren't the best people for predicting the societal impacts
of technology. It is a different skillset. People are not anywhere near as
predictable as machines. You need focus groups, blind studies, and linguists
to build your survey questions. And you need the patience to work with people
who really don't care to learn anything about technology. Many a predicted
revolution has been laughed away once the non-techs had their say.

Google Glass.

~~~
goalieca
I’m pretty sure the people at Facebook fully understood what they were doing
when they gamed the UX to get people addicted.

------
daveFNbuck
Thinking back to my time in grad school, I feel that the list of things I
would have come up with as negative societal consequences would have at least
outed my political opinions and at worst made me look paranoid. You could
greatly mitigate the political aspect by not attaching a value judgement to
which foreseeable societal consequences should be disclosed. You might be able
to mitigate the second effect by having some guidelines about likelihood and
how long-term the consequences need to be.

Over time, people will probably turn this task into copying and pasting a
standard list for their research area. Existing researchers still mostly won't
have to think about societal impact, but maybe it'll have an effect on which
research areas people choose to work in.

------
PeterStuer
What is a negative social consequence? In the interview e.g. the 'destruction'
of jobs through automation is given as an example. I personally do not agree
that this is a negative. Freeing people from forced labor and allowing them to
have other pursuits I see as truly positive. The fact that we maintain an
unsustainable socioeconomic system in which one's right to live is tied
directly to one's role in a planet destroying economic production system is
the real problem.

------
Aaargh20318
What are 'negative social consequences'? That seems like a value judgement to
me, and potentially highly political.

------
jaddood
A more reasonable thing to do is to form a council of computer scientists,
psychologists, sociologists, economists, politicians, and people otherwise
capable of having a useful opinion in the matter.

That council would then have the task of reviewing new technologies and their
potential impacts. The council may then have different subgroups each
discussing a certain technology, or possibly each with a certain specific task
such as: reviewing research papers, watching the developments inside
companies, writing legal propositions, etc...

This council could then be under a government agency, or better yet, an NGO
that can get funding by the government, from donations, universities, etc...

What would you think about that?

~~~
onceKnowable
Absolutely not!

A council, no matter how large, is, relative to the general public as a whole,
made up of a small number of individuals.

Such a council would raise so many ethical concerns, such as: Who are these
individuals? Why are they on the council? What are their biases? What do
their livelihoods ultimately depend on? What conflicts of interest do they
have? Among hundreds of other questions.

Only an informed debate among the general public can lead to laws that
robustly protect the public at large because vested interests get drowned out
by informed dissent from other, equally-qualified, voices who are not affected
by such vested interests.

------
imh
Since most people seem to be missing some context and commenting about the
headline, just be aware that the proposal is smaller. Most papers have a
section that essentially says "This work is important because of XYZ. It will
enable better ABC." They already talk about societal and technical
consequences, that's nothing new. The proposal is to not sweep the bad bits
under the rug and be as rigorous in that section as in the rest of the paper.
It seems fair. If you want to hype why your work is high impact, then you have
to be honest about the anticipated impact.

------
s-shellfish
This article is not about what could happen in the future.

This article is about what does happen now, and is stigmatized against being
disclosed due to fear of personal repercussions.

That was never the point of academia. These institutions exist with all their
rules and regulations precisely so that, in order to discover and uncover, one
need not be afraid of whether doing so will cost one the roof over one's head.

Presently, this article is being severely misinterpreted.

------
coupdejarnac
There was a story on public radio recently (maybe Radio Lab?) about
synthesizing voice. Adobe has developed a "Photoshop for audio". One of the
hosts interviewed a computer scientist on the subject and really pressed on
the ethical uses of such software. The scientist was flummoxed and had not
really considered the ethical implications before. I would not be surprised if
scientists do not consider how their research could affect society.

[https://arstechnica.com/information-technology/2016/11/adobe-voco-photoshop-for-audio-speech-editing/](https://arstechnica.com/information-technology/2016/11/adobe-voco-photoshop-for-audio-speech-editing/)

[https://www.wnycstudios.org/story/breaking-news](https://www.wnycstudios.org/story/breaking-news)

------
Derived
It is absolutely nonsensical to think that someone who creates a thing should
be held responsible for every negative use or misuse of what they create. It's
even more nonsensical to expect someone to come up with every single one of
these scenarios that could possibly happen. It's just not realistic, doable or
even helpful. At worst, it will have an extensively negative effect, as every
single new development will be tut-tutted into the shadows by a group of
hand-wringers, terrified of what a user 7 sigmas away from the mean might do
with it one day. This is an appalling way to construct a society, let alone a
discipline that requires rapid, ground-breaking iteration.

------
valarauca1
I think a lot of the HN community sees themselves too much as grand
inventors: Prometheus bringing fire down from the mountain, ignorant of the
future of war, gunpowder, and bombs.

My career in technology has taught me it's a lot more obvious than that. When
your marketing team starts buying massive amounts of collected data, or your
sales team starts selling ad data to 3rd parties, we know what's happening,
and the grey morality of it. Yet we turn our backs to it.

The reason I think HN is so quick to jump on these topics is because if we
search ourselves we see that our morals rarely stand up to our self-proclaimed
ideals. Jobs are sometimes hard to find, stability is nice, and dying on that
hill doesn't always seem important.

Those are the negative societal consequences we are most dishonest about. Not
Promethean delusions of grandeur.

------
payne92
This is a tough one, since researchers don't control how their technology is
used.

And it's very difficult for even the experts to balance the positives against
the negatives: facial recognition could be used to find a serial killer ... or
a missing child.

As one example, RMS has long advocated against the "ethics" of certain
technologies and companies, while Emacs and GCC have greatly accelerated the
adoption of technology, generally. And those tools have almost certainly been
used directly or indirectly to build some very, very bad things.

------
maire
There used to be an organization called Computer Professionals for Social
Responsibility (CPSR). It grew out of Xerox PARC. I just looked at their web
site and they don't seem to be active anymore.

------
new_age_garbage
Warning: this sorting algorithm may lead to the proliferation of fascism.

------
src3
I see this as a natural extension to the “Threats” section. In comp sci (esp.
software engineering), conference papers are required to have a “Threats to
Validity” section to address reproducibility and generalisability
concerns. NSF requires proposals to describe their broader societal impacts,
but it’s typically used to answer the question of how the research will
positively impact society. I’ve rarely seen it used for negative societal
consequences.

------
bungie4
The ethics of a technology don't have to be so lofty. It's a real kick in the
happiness meter when some code you have written puts people out of work. At
this stage in my career, I actively make arguments for why I (we) shouldn't do
something when the intent is to reduce labour costs.

~~~
carlmr
But basically the whole point of doing something in software is to automate
things that are labor intensive. If you're avoiding things that save work,
you're not doing your job as a software engineer correctly.

It's kind of like using an ox in modern farming because you want more farmers
to have work. You wouldn't do that, because your farming business (barring
some great marketing that lets you recoup the costs) would just go bankrupt
quickly.

~~~
alxlaz
> But basically the whole point of doing something in software is to automate
> things that are labor intensive.

I see where you're coming from, but that's quite an oversimplification, don't
you think? Software that drives a computer's graphics card, and the software
running on it that enables a physician to explore a 3D MRI, are just two
examples of tasks that do not consist of automating manual labour.

(You could probably pay a lot of painters to paint a lot of pictures, but
hooking them up to the MRI might be a little difficult.)

There's a lot of software that achieves _new_ things, rather than old and
well-known things really fast and without involving hands or a human brain.

I prefer to avoid discussing ethics on forums like HN, so I won't comment on
whether or not working on software that puts people out of work is "the right
thing to do" -- suffice to say that, if you wish to avoid doing that, there's
plenty of room to do so while still doing your job as a software engineer.

~~~
clarkmoody
> There's a lot of software that achieves new things

Precisely. And the only way those new things are thought up is if someone has
the time to dedicate to the task. If that person instead has to spend 12 hours
a day foraging for berries just to stay alive, then they won't have time to
build 3D MRI code.

The gains to productivity brought about by mechanical -- and now digital --
automation have lifted billions out of poverty and extended their lifespans by
decades. I agree with GP that the only purpose for software is to automate a
human task. Lucky for us, software is doing things that humans can't do at all
(or would take years of effort, in the case of your MRI picture).

------
apathy
Should Nature disclose negative societal consequences (real, and present, not
predicted)?

It’s not like CS researchers are the ones turning science into a dumpster fire
to sustain 30% profit margins.

Most of the more hysterical screeching about AI, automation, etc is
reminiscent of buggy whip manufacturers bitching about cars.

~~~
onceKnowable
That attitude is what turns non-nerds away from asking our opinion on the
technologies that we develop.

If you don’t want to be involved in the debates about AI, automation et al,
then you can’t really comment on those who do. If you do want to chip in,
inform the public about the realistic dangers. What they need is realistic
information from experts so that they can deduce what risks are acceptable and
what risks are unacceptable.

Even the most untechnically-informed member of the public will get the
positives: AI, automation (or whatever technology you’re developing) makes
software better in some way; it doesn’t even matter how or why, it’s taken for
granted that you’re developing something that you believe will be a net
positive.

But it’s the negatives that need to be legislated for. Regulations
and laws don’t give technologies a pat on the back; they protect the public
from misuse of a technology. Your voice would be welcomed.

~~~
apathy
Automation will destroy certain jobs (as did cars for buggy whips). What
pisses me off (and I believe ought to piss off others) is the monetizing of
these types of works behind paywalls (as Nature and Science, the journals, are
wont to do) when public access to the primary sources would help inform
debate.

Reading tarted-up flag-planting arXiv papers can be tough going even for
practitioners, to say nothing of the general public. But ultimately the
general public will have to vote on propositions, or vote in representatives,
to pass sensible (not hysterical) regulations.

For example, after an Uber lidar-equipped vehicle killed a Phoenix jaywalker
(a part of the law that I do not like, but one that acknowledges physics and
shitty drivers), Uber halted testing in most markets. This is sensible based
on public reaction, but three major problems compounded to take Elaine
Herzberg’s life:

1) the lidar did not function as designed, 2) the backup driver did not
function as instructed, and 3) Ms Herzberg did not obey the law.

This is a situation where the facts eventually became clear, and still it is
difficult to say that one side or the other overreacted (personally I think
Uber did the right thing PR-wise, but I also worry that less testing means
slower progress towards safer cars, since human drivers are legendarily shitty
and much less safe than conservative self-driving cars with functional lidar).

Now let’s consider more complicated problems, such as replacing medicinal
chemists with reinforcement learning systems, or China’s AI approach to
“social credits”. Both have potential benefits and huge potential drawbacks,
and both are relatively easy to grasp compared to some thornier cases. One is
purposely obscured due to a dictatorial regime; one is obscured not by
administrative fiat but by copyright law and a very silly tradition in
academic and quasi-academic research. Hiding results in paywalled “prestige”
journals rather than making them available to the general public is seen as a
mark of distinction (and avoiding a $30k-$50k open access charge is just
sensible conservation of research funds in this scenario; I know because I am
an academic, and I personally think OA charges are a waste, though for
different reasons).

Anyone who wants to [break the law] can obtain the primary documents in the
former case via sci-hub. That’s just a fact. But is this any more reasonable
than Georgia allowing companies to charge for access to state laws? Is it
reasonable for people to pay twice, in most cases, first for the research* and
then for the report, when faster dissemination of primary sources can directly
inform debate over some important issues?

It seems silly that we would have a system where others are expected to pay
for, digest, and opine upon research done by the primary authors. It would not
be necessary if the norm were to disseminate AI breakthroughs in a format like
distill.pub, where the whole point is to explain both the results and their
context. But until academic and quasi-academic researchers can kick the
“prestige” habit, that’s unlikely to happen, and the public will be left to
listen to pundits (often idiots) rather than forming their own well-informed
opinions, and that sucks.

No time to edit, but hopefully you get the idea. Disclaimer: I am an academic,
applying “AI” to medical care for children with rare and deadly diseases; I
used to work at GOOG a million years ago; and what I see among the worst
hucksters sickens me, because I think it’s going to kill people in spite of
better competing ideas that could improve the human condition. I’m just one
person with one vote. I’d like everyone else to form their own opinions,
because they might change mine. Thus primary sources are critical.

* anyone who does not believe that taxpayers subsidize industrial & corporate r&d via tax credits and incentives is quite naive; it is not different from academic competition in that regard.

------
User23
Negative by what metric? And how on earth is a computing scientist supposed to
know the social consequences of his work, good or bad?

Doing this would just be feel-good virtue signaling nonsense that would reduce
the overall quality of the literature.

------
clarkmoody
"Societal consequences" are an emergent phenomenon. By definition, they are
impossible to predict.

Often, the consequences of some new technology are both good and bad.

~~~
roflc0ptic
That’s not really part of the definition of an emergent phenomenon. Yeah yeah,
you can’t predict everything. But you don’t need to be a visionary or defeat
the rules of complexity science to predict the NSA.

------
sharemywin
This way scientists can create a road map for ethically challenged CEOs and
VCs, so that they can get the most profit out of the technology.

------
jerf
So, just to be clear, the HN gestalt is that economists generally don't
understand anything because "homo economicus" is a fiction and they don't
understand the impacts of even relatively simple-to-characterize policy
changes due to the complexity of a society made up of irrational actors, _but_
it is reasonable to expect computer scientists to correctly predict the impact
of their prospective research whose practical applications may not yet even be
a gleam in anybody's eye, and decide on that basis what should and should not
be researched and/or published and/or written?

The solution to this conundrum is obvious: Take the computer scientists who
can make these amazing predictions and move them into the field of economics,
where their predictive power can generally do more good for society. Then they
won't invent anything harmful _and_ can do trillions of dollars of good for
the world economy, which is way better than all but the most black swan-y of
research projects they could have come up with.

I'd also observe, HN gestalt, that you believe that pure research should be
pursued because it is unpredictable where advances will end up being useful.
This contradicts the idea that researchers can guess the societal impacts of
their research. If researchers can successfully guess the societal impacts of
their research, then people _are_ justified in cutting pure research whenever
they decide it's useless, because the judgment process that led to the
conclusion "this is useless" is the exact same judgment process you're asking
researchers to do here. Accepting that these predictions can be made will
rather quickly result in the elimination of pure research. (Because, pretty
much by definition, it looks useless for the foreseeable future.)

If you want to decide this is a good idea and incorporate it into your
worldview, you're going to have to give up several other things to get there.
And have a frankly irrational view of the predictability of the universe, in
my view.

You can't even really escape from this by saying "But it's so obvious in some
cases that research is bad", because it really never is. Even germ warfare
research can lead to valuable medical insights, and you really never know what
valuable medical insights will lead to something that helps germ warfare. The
idea that researchers can predict the future and should exercise some social
responsibility by doing so is not some sort of major moral advancement; it's a
childish regression from something we figured out decades ago, demonstrating
our further scientific decay.

(The other inevitable objection is probably around nuclear bombs. Had the
Manhattan Project never existed, there is no world in which we'd be sitting
here in 2018 with no nuclear bombs. The only change would be who had them
when. Nuclear bombs do not exist because of man's hubris and disregard for
consequences. They exist because _the laws of the universe permit them_.)

~~~
forapurpose
> economists generally don't understand anything

I think this is taking hyperbole literally. The world is not at all completely
unpredictable; it's highly susceptible to reason and science - in a way,
that's the entire enterprise of the Enlightenment and its child, science.
Predictions aren't perfect, but they are hardly useless. Your world is built
on them; your alarm will go off; the train will be roughly on time and won't
collide with other trains; investors invest based on economic and business
predictions; software developers develop based on market predictions and
predictions of user psychology; we can predict how well UIs will work,
hardware failure rates, etc. The list goes on forever.

In particular, economics has provided, especially since WWII, a period of
economic growth that is orders of magnitude beyond what humanity ever
experienced before. Depressions used to be a regular occurrence; they have
stopped. Systemic bank failures no longer happen, we know how the money supply
works, and we understand the impacts of various policies much better than we
could by throwing darts at a graph. Recessions are not as bad; sustained growth
is the norm, which is a
miracle. The growth of the world economy, lifting billions out of poverty for
the first time in history, is beyond all but the most visionary dreams of a
century ago.

Do economists and economics get it wrong sometimes and to varying degrees?
Absolutely. Unfortunately, there is nothing perfect in the world.

------
patagonia
“Should investors disclose negative social consequences?”

------
ggg9990
The fact that the article says that “oil and tobacco” are the Bad Industries
(that CS shouldn’t become like) says everything you need to know. There is no
CS without oil.

------
chatmasta
Isn't this exactly what Google Cloud, EC2, etc are? i.e., Google and Amazon
selling access to their internal infrastructure, or something closely
resembling it?

------
blablablerg
Yes.

------
megaman22
I guess I can write down a list of wild-ass guesses of potential things that
might happen if you use my software on the readme.md.

------
EatYourGreens
Phillip Rogaway gave an interesting invited talk [1] (and an accompanying
essay [2]) on a related topic at a cryptography conference in 2015. It is
quite an entertaining read, even if you are not a cryptographer.

[1]
[https://www.youtube.com/watch?v=F-XebcVSyJw](https://www.youtube.com/watch?v=F-XebcVSyJw)

[2]
[http://web.cs.ucdavis.edu/~rogaway/papers/moral.html](http://web.cs.ucdavis.edu/~rogaway/papers/moral.html)

