
New ethics courses in computer science - htiek
https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html
======
ChuckMcM
I grew tired of arguments supporting fleecing the users that were basically
"We aren't making them do this, they choose to do it." I have heard them put
forward at nearly every company I've worked at, at various levels and through
various departments. At Google it was always "We don't take an editorial
stand, this might be just what some of our customers want." That has been the
most interesting aspect of their recent moves to either sanction advertisers
or block them. So somewhere in there it has gone from "If people don't want to
see ads there are lots of adblockers out there to choose from." to "We need to
take a stand against abusive advertisers." And that is a huge difference in
approach.

Do engineers need a Hippocratic oath? I'm not sure they do, but if we forced
some liability on companies for what the software they sell does, that would
change a lot of things fairly quickly.

~~~
kbenson
> At Google it was always "We don't take an editorial stand, this might be
> just what some of our customers want."

It's an interesting choice for companies, and Google in particular. Either be
proactive and get accused of forcing your customers' behavior or of having
ulterior motives based on money (e.g. Google's ad-blocking program), or let
them do what they want and get accused of turning a blind eye because it makes
more money.

That said, I have little sympathy for Google in this case. They made the
choice to go for ad revenue as their business model years ago, and it may have
been the most feasible path to success when they did so, but that doesn't mean
the perverse incentives weren't obvious at every single step along that path.

I work in the event ticketing secondary markets. That is, I work for a
brokerage that buys and sells event tickets for a profit. I mention this
because a lot of people have a _very_ negative view of this industry (some of
it misinformed, some very well founded in the actions of some bad actors). We
run an above-board shop and make money through lots of analytics and targeted
investment, and I sleep fine at night. I'm not sure I would if I were employed
in certain departments of Google or Facebook.

~~~
mulmen
I'm curious, what value does your ticket brokerage create?

~~~
kbenson
For customers, liquidity and price accuracy, ticket availability, and the
chance for discount tickets (in the case where brokers make a bad call or
execute badly, which happens often).

For venues and promoters, guaranteed attendance and immediate cashflow: sell
50,000 tickets at an average of $80 immediately instead of spread over 9-12
months, and that's $4 million that can be invested back into the business or
something else, while also reducing risk (see the rough sketch below).

For artists and promoters, the capability to hold back chunks of inventory for
later sale on the secondary market at increased cost. This allows them to take
advantage of a functioning market to make more money while also avoiding fan
displeasure at high ticket prices. Also, the ability to say they sold out X
size venue in Y time, which can denote popularity (or be used to claim a level
of popularity).
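
To put a rough number on the cashflow point above (a toy sketch; the 5%
annual return and evenly spread monthly receipts are assumed for
illustration, not figures from the thread):

    # Toy present-value comparison: sell out immediately vs. spread the
    # same $4M of sales evenly over 12 months.
    # Assumption: cash in hand earns ~5%/year.
    tickets, avg_price, annual_rate = 50_000, 80, 0.05
    total = tickets * avg_price  # $4,000,000

    # Up-front sale: the full amount earns a year of return.
    upfront = total * (1 + annual_rate)

    # Spread sale: each monthly slice earns interest only for the
    # months remaining in the year (m = months of interest earned).
    spread = sum((total / 12) * (1 + annual_rate * m / 12)
                 for m in range(12))

    print(f"up front: ${upfront:,.0f}  spread: ${spread:,.0f}")
    # ~$108k advantage to selling up front, before counting reduced risk.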

Brokers take on risk for a possible reward. If you're buying tickets that
aren't intended for _immediate_ resale, and are holding them for 9-12 months
(common), anything can happen in that time period. That artist may become less
popular, or even just get sick and cancel much of the tour (in which case you
just had your money tied up for months, at best losing out on other
investments and at worst paying some percentage on a credit account), which is
a loss.

It's not really all that different from other financial markets.

Edit: As much as some artists like to complain about the secondary market,
there's a really simple solution that just works. Increase supply. Garth
Brooks plays twice a night and multiple days in a row in the same venue at
each stop on a tour. Kid Rock will play seven consecutive days in
Detroit. The downside? They move the risk from the brokers to the venue,
promoters and artist, because they may lose money if they don't fill enough
seats. This itself is an illustration of the role the secondary market plays,
and indeed heavily bought events with inflated prices often get additional
dates added which depress the market prices.

~~~
throwawayjava
Engineering ethics takes two forms: "what should I build" and "how should I
build it".

I tend to prefer "engineering ethics" courses that focus on the second
question because the first question is completely parametric in more general
ethical and even political considerations.

It's true that engineers should take a course of study in pure _ethics_ to
learn how to think through questions of the first variety, but I'm not sure if
engineering departments are the right ones to house/teach that particular
course.

------
nemild
If useful, I wrote my own thoughts on ethics in software, after reflecting on
certain experiences over the years:

> A serial tech entrepreneur in Silicon Valley once asked me to design a
> “social stockade” for his financial services customers. It would lock people
> out of their social media accounts and tweet out/FB share to their friends
> when they hadn’t paid a loan. He pitched it to prospective employees as
> meaningful work that would reduce the cost of loans for the needy.

> I was horrified that his product was being built and that many others would
> likely take the role I was turning down. And he was hardly the first to
> pitch his “innovation” as providing only good.

[https://www.nemil.com/musings/software-engineers-and-
ethics....](https://www.nemil.com/musings/software-engineers-and-ethics.html)

If anyone ever wants to discuss something, feel free to reach out (see HN
profile).

~~~
nostrademons
Interestingly, many microfinance programs that are widely heralded as having
lifted many people out of poverty (eg. Grameen Bank, CARE) rely heavily on
peer pressure to boost their repayment rates. They lend out to groups within a
village, and then if any one member of the group fails to repay the loan, the
group can't access more capital. This creates a strong incentive for other
members of the group to exert social pressure to make sure everyone pays back
their loan.

...which goes to illustrate the complexity of most ethical issues that arise
out of social systems. Oftentimes, something that seems cruel to an individual
within the system is actually in the best interests of the participants of the
system as a whole, and sometimes can even be in the long-term best interest of
the person themselves. And then whether you view such features as cruel &
unethical or necessary & beneficial depends on your perspective & role within
the system.

(This could also be taken as a synecdoche for capitalism itself, which on a
micro level is about as cruel as you can get - individuals compete in a race
to the bottom to do things more cheaply, and nobody will help you unless it
serves their interests too - but on a macro level is the most effective system
we know of for satisfying consumer wants.)

~~~
nemild
Absolutely, and I say that having worked in microfinance before receiving this
request.

But to me, that doesn't mean we engineers can't still draw a line somewhere,
especially if we are called on to participate. Just because peer pressure
works in some contexts, it doesn't mean that it is always the right choice, and
we need to debate the tradeoffs in different contexts (much like engineers
debate tradeoffs in any technical decision).

For example, the easiest way to cut the price of loans down would be to kill
anyone who didn't pay; defaults — and loan costs — would fall
dramatically. This would immediately provide loans to many people who are
priced out. While that may be useful in some scenarios, it's not a system I
personally believe in.

~~~
justin66
> For example, the easiest way to cut the price of loans down would be to kill
> anyone who didn't pay; defaults — and loan costs — would fall
> dramatically.

You state that as a hypothetical, but it's not like this has never been tried.
The loans handled by lenders who include the threat of violence in the
repayment plan are absolutely _not_ characterized by low costs.

~~~
nemild
But loans of this kind (such as from a loan shark) have their own set of
risks that have to be factored in and affect the interest rate:

* You may have no legal recourse and no collateral to seize

* The loan may be funding risky or illegal activity with a high likelihood of
failure, which demands a higher interest rate

* There may be no competition that drives the price down

Ceteris paribus, increasing the cost of non-payment should reduce interest
rates. If you relax the "ceteris paribus", then all bets are off.

Another way to see this is to ask: if the lender had to forsake the
threat of violence, would the loan price go up or down?
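
Here's a minimal sketch of that "ceteris paribus" claim (the default
probabilities, recovery rate, and risk-free rate are invented for
illustration):

    # Toy breakeven pricing: a lender charges the rate r at which
    # expected repayment matches a risk-free alternative:
    #   (1 - p) * (1 + r) + p * recovery = 1 + risk_free
    # Harsher enforcement is modeled only as a lower default rate p.

    def breakeven_rate(p_default, recovery=0.0, risk_free=0.02):
        """Interest rate at which the lender breaks even in expectation."""
        return (1 + risk_free - p_default * recovery) / (1 - p_default) - 1

    print(f"{breakeven_rate(0.20):.1%}")  # 27.5% if 1 in 5 borrowers default
    print(f"{breakeven_rate(0.05):.1%}")  # 7.4% if enforcement cuts defaults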

~~~
justin66
Introducing the threat of violence isn't smoothly adjusting a variable in a
formula. It's introducing a gating factor that's going to keep not-desperate
people from dealing with you.

~~~
nemild
I'm happy to discuss with you offline (see my profile). The point I'm trying
to make is that stronger enforcement mechanisms should — on average — reduce
the cost of loans. As I point out, there are real debates about where to draw
the line on appropriate lender enforcement, debates I've personally struggled
with.

I apologize that my example isn't perfect, and you're absolutely right that
there is selection bias, unless there is little recourse for other products.

------
bpicolo
Some recent techno sci-fi has done a really interesting job of getting this
sort of stuff in front of the general public. Black Mirror and Altered Carbon
are both terrific and dive into tech ethics to different extents, and the
reach there is many millions of viewers.

That's sort of an interesting potential take on this - how do you take ethical
questions and get drastically wider reach for them (vs. a static class)?
Revisionist History has a recent episode about how satire lives in an
interesting realm here (and how modern, Western satire seems to miss the
mark).

[http://revisionisthistory.com/episodes/10-the-satire-
paradox](http://revisionisthistory.com/episodes/10-the-satire-paradox)

~~~
kerkeslager
> Black Mirror, Altered Carbon are both terrific and dive into tech ethics to
> different extents, and the outreach there is many millions of viewers.

Also _Electric Dreams_.

------
spodek
I like their goals, but the traditional academic implementation described in
the article and in the syllabus the article linked to won't achieve them.

If you want people to learn behavior, you can't lecture them into it, nor will
talking about case studies or writing papers help. Look at the behavior those
classes teach: analysis, reading, writing, debating other people's behavior.

Active, experiential, project-based, exercise-based learning will do the
trick. Many professors think "flipping the classroom" or having more class
discussions is active or experiential, but it rarely is.

You have to get students acting on their values, feeling their own values
conflicting with each other on projects they care about involving people in
their lives that they care about, having others depend on their actions,
having to perform on something they created, not spelled out for them in a
case study. Then they learn empathy, compassion, responsibility, initiative,
self-awareness, and ways to act in challenging situations.

If you want to be an artist, you have to practice making art. Art appreciation
classes won't hurt, but they won't help, any more than reading about or
discussing lifting weights will build muscle.

The classes they describe are ethics appreciation or leadership appreciation.
Well-intentioned, but limited.

~~~
ebenrock
This is very much like the gender imbalance in tech - starting to address it
in college only provides a band-aid. This is a much deeper issue that starts
soon after birth.

When I was a grad student at CMU, they had the Reasonable Person Principle to
help guide your actions and interactions. The principle says nothing about
ethics specifically, but generally that you should be open to others'
concerns, practice self-reflection, and accept that their viewpoints may
differ from yours. In spite of this principle being in place, I definitely
dealt with at least one very unreasonable person during my graduate studies
there.

I've been in several ethical conundrums in my career. In a couple cases my
choice was the "lesser" unethical option among many. It can be difficult to
make those choices when your career or employment is on the line. I've left
jobs because I believed (or knew) the work I was doing wasn't quite on the
level.

Trying to recreate these scenarios, realistically, in a classroom is pretty
hard. Having discussions about ethics is nice, but probably not very
effective. Could you build a course, or assignment, where the only way to get
an A is to cheat or act unethically? Would that even be ethical for the
university to offer?

In my cynicism, this looks like a "cover-our-asses" maneuver by universities,
at least in part.

~~~
rvo
The Reasonable Person Principle was the best thing I learned from CMU:

* Everyone will be reasonable.

* Everyone expects everyone else to be reasonable.

* No one is special.

* Do not be offended if someone suggests you are not being reasonable.

------
ilamont
I attended the ARinAction summit earlier this month, and heard an interesting
tidbit from the MIT Media Lab's Pattie Maes: She requires new students joining
her program to watch Black Mirror (1).

In contrast, when I attended business school one of the models held up to us
in the very first week was the team at Harrah's who designed a loyalty program
for frequent gamblers (2). I remember one of the professors or someone in a
video interview we watched crowing, "it was like printing money."

Neither the case nor the instructor had anything to say about the fact that
this was basically a technology-driven scheme to extract as much money as
possible from members of the public, including gambling addicts and other
vulnerable populations. The "big question" at the conclusion of the case
reads:

 _When asked about the company’s long-term vision for its RM system, a member
of Harrah’s RM team became thoughtful for a moment. He responded that, while
all of the near and longer-term developments described above were critical, he
thought there was one important aspect of all RM systems that needed further
development. "What I’d really like to know — and I pose this as a question for
researchers in revenue management — is how to integrate information about
price elasticity into these systems. Clearly, changes in price affect the
level of demand we experience. However, none of the systems we are familiar
with capture this effect."_

1\. [https://theoutline.com/post/3167/black-mirror-mit-
class?zd=4...](https://theoutline.com/post/3167/black-mirror-mit-
class?zd=4&zi=rgwp4nu7)

2\.
[https://pubsonline.informs.org/doi/pdf/10.1287/ited.1090.003...](https://pubsonline.informs.org/doi/pdf/10.1287/ited.1090.0031cs)

------
subroutine
Whenever I hear about these ethics courses I'm mainly curious about what non-
obvious, substantive content is being taught (because there is apparently
enough to fill a semester-long course). Would anyone who has taken one of
these courses care to share something they learned that they had never
considered before the course?

~~~
lumberjack
The philosophical aspect of ethics is not devoid of substance. There are many
ways to think about the ethics of a situation. You learn different frameworks
of ethical thought and you gain new perspectives. Some people who are very
ideological will write this off as useless bullshit, but only because they are
very invested in only one perspective and not open minded enough to consider
the merits of other ethical frameworks.

To give you a programming analogy: it is as if you had always programmed
imperatively, using C, because you learned that organically as you grew up,
and then you take this class and learn about functional programming and
object-oriented programming, and discover how you can think about the same
problem from a completely different perspective.

Except with the crucial difference that the end result will not necessarily
be the same, and in fact there is not always a right answer. But when there is
no single right answer, it is better to know many possible answers, and why
they are possible answers, than only one.

------
seabird
All of these noble ideas of ethics and social responsibility are great for
everyone who feels bound by them. Many of the rest can't be bothered. People
who have to be taught that the missile they're building is going to kill
people and that they should feel bad about it probably already know that the
missile they're building is going to kill people and they _don't_ feel bad
about it. You see this issue come to a head in computer technology because
much of it doesn't have absurd cost/precision requirements like weapons,
drugs/pharmaceuticals, etc. have.

~~~
taurath
I think people come up with all sorts of justifications for why what they’re
doing isn’t wrong. Concerted effort to knock down those justifications and
socially shame those that cling to them does work - consider all the people
who don’t want to go work for an ad company or a defense contractor, or those
who leave with one of the reasons being to escape the industry that they know
is wrong.

~~~
peoplewindow
I think the opposite - people come up with all sorts of justifications for why
what _other people are doing is wrong_. This lets them feel self satisfied,
virtuous and perhaps a little smug effectively for free, and if their position
is perhaps a little thinly thought out, well, no big deal, it's not like
anyone is going to listen anyway.

I've watched many attempts to tar entire industries as evil over the years.
Invariably the people doing the tarring look foolish or naive - like they
can't think more than one step ahead, or like they live in a world where
tradeoffs do not exist.

To pick just the two examples you chose: without defense industries countries
would be ripe for being taken over by even a slightly aggressive invader who
would immediately commit all sorts of horrible atrocities. That's why defense
exists. Given that countries have been invading each other for thousands of
years, it's a massive stretch to believe we are in a post-war society and
people who attack defense workers invariably never try to argue that. They
don't seem able or willing to think the next step ahead: "ok, everyone refuses
to build weapons.... then what?"

And as for ads, if you remove all ads from the internet, TV, cinemas etc then
all those things would suddenly become way more expensive. Good luck affording
an internet connection if your daily browsing habit isn't being subsidised by
advertisers anymore. That would be a fast way to ensure nobody poor could use
the internet. Do you hate the poor? Probably not: more likely you never
thought about the consequences of not having advertising.

~~~
Maybestring
> Good luck affording an internet connection if your daily browsing habit
> isn't being subsidised by advertisers anymore.

What portion of ISP revenue is from advertising? I suspect it is vanishingly
small.

~~~
peoplewindow
ISP revenue - none.

Revenue for all the free services and sites the ISP connects you to - almost
all of it.

------
mikegerwitz
I'm giving a talk in March at LibrePlanet 2018 entitled "The Ethics Void". The
lack of ethics in CS education is a core component of the talk. I'm neck-deep
in my talk research right now, so if you are a student, educator, or anyone
else with thoughts on ethics in CS, I'd love to hear from you:

[https://mikegerwitz.com/talks](https://mikegerwitz.com/talks)

Unfortunately, the codes of ethics, courses, etc. that do exist largely ignore
user freedoms (in a software freedom sense) and the host of ethical concerns
that come with them. If you have examples that _do_ address those issues, I'd
really appreciate hearing about them.

And please join us at LP2018 (hosted at MIT)!

------
zombieprocesses
We already have ethics in the philosophy department. I was a CS major and I
took ethics and liked it so much I double majored in CS and philosophy.

Ethics has no place in CS, any more than ethics is required in biology or
physics or algebra.

If universities want students to learn about ethics, then make ethics 101 a
"required elective".

~~~
MereInterest
Your comparisons are rather odd, given that most universities do have ethics
requirements for the sciences, focused on examples from the field in question.

Biology: Do not repeat the Tuskegee experiments. Do not be the next Andrew
Wakefield.

Physics: Do not falsify data. Do not plagiarize results. Do not play fast and
loose with statistics.

I can definitely see corresponding examples being made for issues that affect
CS.

CS: Do not collect customer information that is not needed for the task at
hand. Do not describe your machine-learning model as being free of bias based
on race/sex if it was trained on real-world data that may be biased.
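
As a minimal sketch of what auditing that last claim might look like (the
data, group labels, and outcome counts below are invented for illustration):

    # Hypothetical audit: compare the model's positive-outcome rate
    # across demographic groups before claiming "no bias".
    # All data here is made up for illustration.
    predictions = [  # (group, model_said_yes)
        ("A", True), ("A", True), ("A", False), ("A", True),
        ("B", False), ("B", False), ("B", True), ("B", False),
    ]

    counts = {}
    for group, positive in predictions:
        yes, total = counts.get(group, (0, 0))
        counts[group] = (yes + positive, total + 1)

    for group, (yes, total) in sorted(counts.items()):
        print(f"group {group}: {yes / total:.0%} positive rate")
    # group A: 75%, group B: 25% -- a gap that size calls for
    # investigation, not a claim of "free of bias".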

~~~
Zak
The examples from other fields offered here relate purely to the academic
field, while one of your CS examples is very much related to software as a
business.

Biology as a business: don't try to patent the world's food supply.

Physics as a business: consider whether your client is building a weapon out
of your work and whom they might use it against.

------
thomastjeffery
In an IT-related class I had in High School, part of the course was a lesson
in ethics.

Part of the "lesson" taught that if you have a good/unique idea, you should
patent it, lest someone else get the value from it before you do, and that you
should keep your software closed-source lest someone pirate it.

There was no voice for free software or against the absurdity of software
patents outside my own vocal retorts.

This was part of a district-wide course, and probably contained popular ideas
used by many other districts.

Another thing I see in schools/colleges is that Microsoft will give free
licenses to the school and students for their software so long as it is used
and taught. Free software takes a backseat, and people are taught to use - and
prefer - Microsoft's proprietary tools. Microsoft gets to control their target
audience, and be seen as doing something generous, not abusive.

------
QML
Honestly, there should just be a required ethics component to all college
curriculums; I am not sure why it needs to be technologically focused.

I would actually say the tech industry is the least of our worries with
regard to ethics; over the last two years, tech has been heavily criticized,
and as a result it seems that people in the field are willing to change.

Can't say the same about any other industry.

~~~
spydum
Is this not common? Even at junior colleges, ethics is typically a first-year
requirement.

------
wu-ikkyu
"I am convinced that if we are to get on to the right side of the world
revolution, we as a nation must undergo a radical revolution of values. We
must rapidly begin [applause], we must rapidly begin the shift from a thing-
oriented society to a person-oriented society. _When machines and computers,
profit motives and property rights, are considered more important than people,
the giant triplets of racism, extreme materialism, and militarism are
incapable of being conquered._ "

-Beyond Vietnam (1967), Dr. Martin Luther King Jr.

[http://www.americanrhetoric.com/speeches/mlkatimetobreaksile...](http://www.americanrhetoric.com/speeches/mlkatimetobreaksilence.htm)

It seems futile to silo this as a problem of "tech", when really it's a
problem of society at large: that profit motives are largely considered more
important than people.

~~~
dragonwriter
> It seems futile to silo this as a problem of "tech", when really it's a
> problem of society at large

Of course, but while it is broadly socially acceptable to criticize a
particular industry or technology, criticizing _capitalism_ is less
acceptable; indeed, redirecting frustration at particular (and periodically
changing) industries and away from the system itself is a key defense
mechanism.

------
fortythirteen
I tend to tune out every time a journalist cites the main problem with "the
dark side of tech" as "fake news".

Firstly, it's devolved into a term to play upon the confirmation bias of
people who think those who hold a different opinion than them did not arrive
there out of differing life experience, but because they must be either stupid
or evil.

Secondly, there are many dark sides of tech that are of greater importance
than what is usually a subjective assessment that news is fake, such as
unfettered personal data mining, engineered addictiveness, cooperation with
oppressive governments, and growing soft-censorship of users whose politics
differ from that of major platforms' operators.

------
lostcolony
That's one thing that, in hindsight, I quite liked about undergrad CS at
Georgia Tech. CS4001, Computer Ethics, was required.

It didn't try to push a specific worldview, but rather asked people to
consider the ramifications of technical (and business) decisions, to discuss
them, and to recognize the stakeholders beyond just the company paying. At the
time I went through it, DRM was a big topic, and big data concerns were
beginning to be (especially the observation that the problem was more than
just what you stored; it was what -everyone- stored, and the ability to
correlate it; you had to consider what else was out there). I imagine the
latter now dwarfs
the former.

------
whatok
So the same places where all of these companies got their groupthink from are
supposed to fix the problem?

------
deckarep
Also watch Black Mirror on Netflix. Beyond the satire, dark humor, and guilty
entertainment, this show offers quite a lot to consider on this exact subject
matter.

~~~
pdkl95
I _very_ highly recommend this[1] _outstanding_ analysis of Black Mirror. It's
short, it explores how Black Mirror fits in the history of sci-fi, and is one
of the most concise explanations of the root cause of this kind of "tech
problem" (hint: it isn't actually caused by _technology_ ; it's how _people_
use it. Technology is just an amplifier).

[1]
[https://www.youtube.com/watch?v=hr9_DcO6G3A](https://www.youtube.com/watch?v=hr9_DcO6G3A)

------
woodruffw
I'm teaching a 1-credit class on ethical hacking[1] this semester at my
university, and I'm really glad to see this issue gain serious traction in
major CS departments.

I only wish it happened earlier -- I wanted to help develop a (more general)
CS ethics class around a year ago[2][3], but encountered resistance from
instructors and professors over perceived impracticality and adding non-
technical "burden" to the major.

[1]: [https://github.com/UMD-CS-STICs/389Rspring18](https://github.com/UMD-CS-
STICs/389Rspring18)

[2]:
[https://news.ycombinator.com/item?id=14680425](https://news.ycombinator.com/item?id=14680425)

[3]:
[https://news.ycombinator.com/item?id=14106201](https://news.ycombinator.com/item?id=14106201)

~~~
aoki
kudos.

ABET-accredited engineering programs have had to document instruction in
design ethics for a long time. anybody designing a good CS curriculum should
know that and be thinking about whether CS really ought to be different in
that regard.

berkeley has offered CS 195 [0] for more than three _decades_.

[0]
[http://inst.eecs.berkeley.edu/~cs195/](http://inst.eecs.berkeley.edu/~cs195/)

------
foxrider
I think it's of dubious worth, because forcing people to take ethics
classes doesn't mean that they are going to stick to the proposed ethics. I
took an ethics class at my uni back in the day because I was curious what it
was about and it also was a fairly easy one to pass. The prof was nice and
informative, but her conclusions were something I would disagree with all the
time, and I left the course only with knowledge of some historical stuff;
none of my views on what's ethical and what's not had changed. If anything,
I only got to solidify my position based on the facts she provided. Unless
people adopt these ethics willfully, there is no forcing them to.

------
workthrowaway27
I doubt these ethics courses do anything to change people's behavior.

------
samzeisler
This is incredibly long overdue and so important. My partners and I had
designs on creating an ethical framework for technologists years ago but
didn’t feel we had the platform or the reach to spread it. That’s no excuse
for sitting idly by and doing nothing, but nonetheless we are thrilled to see
this now coming into play. I hope that the professors and professionals who
are contemplating this will come together and form an alliance to create a
national standard policy and statement around this.

------
drdeadringer
Someone recently suggested the podcast "Engineering Commons", and I'm
currently playing catch-up. They had an episode about "Ethics", in which they
discuss the topic with their guest.

------
tonetheman
It is interesting but this is in direct opposition to "make money fast and do
whatever you need to do." When shareholders come first there is no way ethics
will ever be involved.

------
jakelarkin
great, but how about also focusing on the MBAs and non-tech background
managers/execs that fill out the 5-10 levels of hierarchy above line engineers
at any BigTechCo. It's not like bottom-of-the-rung SWEs necessarily have
visibility into, or control of, the company selling ads to customers who use
them for disinformation campaigns. Or how about teaching fact-checking &
propaganda
skepticism to ALL citizens.

too often the problems currently ascribed to "tech" are problems of society as
whole.

------
lev99
Ethics is already required for ABET accreditation. Almost every good United
States-based, university-level Computer Science program is ABET accredited. I
took an entire two-credit course on ethics to receive my degree, which
involved writing at least four essays. One of the essays discussed the
ethical considerations for writing code utilized by the military. The idea
that computer programmers can do evil is not new. While some universities are
creating new computer ethics courses, creating such courses is
not new. How is this worthy of national news? The NYTimes has been increasingly
disappointing.

~~~
IntronExon
In the absence of regulatory bodies, professional sanction, and the like, it's
worth even less than most accreditations in the tech world.

~~~
aoki
ABET accreditation is for degree programs, not individuals. it sets curricular
standards.

most credible engineering programs maintain ABET accreditation, as it has been
required for graduates to become PEs [0].

[0] i recently heard that berkeley EECS is dropping its ABET accreditation, as
this is no longer true:
[https://eecs.berkeley.edu/sites/default/files/abet_letter_to...](https://eecs.berkeley.edu/sites/default/files/abet_letter_to_everyone_rev7.pdf)

------
ssebastianj
Don't forget "Dark Patterns" [0]

[0] [https://darkpatterns.org](https://darkpatterns.org)

------
bobthechef
I sincerely hope that consequentialist ethics won't dominate those courses.

------
pascalxus
I've been a software engineer for over 12 years and have never had to make a
single ethical decision. It's always, here's the spec: build it.

~~~
lovich
Whether or not you choose to follow orders _is_ an ethical decision.

~~~
pascalxus
The laws align pretty well with ethics. Most things that are unethical are
also illegal and hence won't be built by any legitimate company.

How often do you think a company chooses to build something that's unethical
but legal? Have you ever heard of an engineer who said to the PM or boss, "Hey,
I'm not going to build that because it's unethical"? I've never seen that
happen. Have you?

~~~
lovich
Any payday loan company? The financial service companies that figured out how
to obfuscate shitty assets and led to the 2008 recession? Companies that mine
your data and sell it to the highest bidder and rely on hiding privacy
settings or making it too complicated for most people to prevent? Companies
like Equifax that hold on to important personal data and don't even do the
bare minimum to protect it?

To be honest I'm not sure how to even respond to the statement that the law
aligns pretty well with ethics. The law aligns pretty well with what the
people with power want, and in any study of ethics you quickly learn that legal
!= ethical

------
purple-again
Always a loser's game. You implement it with ethics in mind, I do not. I win
and you fall into obscurity. The law is all that matters. If I cannot lose
what I take, there is no reason for me not to take it (so long as I'm still
chasing the 'fuck you money'; morals are great after you have it).

I understand it, lots of people have dreamed big dreams of how great the world
would be if everyone else was just like them too.

~~~
KirinDave
Flip side of this: People who say this are usually folks trying to justify the
fact that they're resorting to underhanded, abusive tactics to compete with
talented, successful people who are not. It is the battle cry of the mediocre,
the hallmark of scammers, the ultimate admission of the untalented and
unworthy.

I know a lot of folks in fintech and blockchain, and we've all skated the
outer edge of what's defined by law. The folks who give a damn about setting
sustainable policy and not exploiting customers? Those are the folks who are
still around. Even big US national banks, notorious for their immunity to law
and enforcement, are starting to feel the pressure. Rumor is, customers left
Wells Fargo in droves after the last kerfuffle and new account opening went
down substantially at Citi after their money laundering fine. The hidden cost
of bad optics is immense. And as data science makes the formerly invisible
behaviors of the world visible, society's going to get a whole lot more
capable of identifying "bad behavior" and punishing it.

To say "The law is all that matters", in the context of these industries, is
beyond naive. It's not just wrong, but it's leaving money and opportunity on
the table. It's bad business AND bad optics.

> (so long as I'm still chasing the 'fuck you money', morals are great after
> you have it).

If this ethos is so effective, why are you still "chasing?" Or is this the
rhetorical we? Or the royal we? I can never tell here.

~~~
kerkeslager
> Those are the folks who are still around. Even big US national banks,
> notorious for their immunity to law and enforcement, are starting to feel
> the pressure. Rumor is, Customers left Wells Fargo in droves after the last
> kerfuffle and new account opening went down substantially at Citi after
> their money laundering fine. The hidden cost of bad optics is immense.

There are two parts to a cost/benefit analysis, and you're only talking about
the cost part of unethical behavior, as if the benefit doesn't exist. With
Wells Fargo, Citi, Experian, Bank of America, etc., when their scandals went
down, did they lose more money in the scandals than they gained from their
antisocial behavior? I haven't tracked all of these cases to their end, but
at least with Wells Fargo, they did not.

More importantly, did the individuals who made the antisocial decisions, who
are protected by limited liability, lose money? I doubt it. Maybe they're not
"still around", but they're happily retired in mansions and there are plenty
of newcomers willing to do the same thing to get the same reward.

It's not 100% of the time: there are cases when bad behavior actually results
in a net loss. But I'd say these are anomalies and not the norm. And even if
they weren't, antisocial behaviors are profitable enough of the time that some
percentage of antisocial behaviors have a positive expected value when looked
at probabilistically.
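
To make that last point concrete, here's a toy expected-value calculation
(every number is invented for illustration):

    # Toy model of "misconduct pays in expectation". Invented numbers.
    gain = 10_000_000   # profit from the antisocial behavior
    fine = 4_000_000    # penalty if caught
    p_caught = 0.3      # chance of being caught and fined

    ev = (1 - p_caught) * gain + p_caught * (gain - fine)
    print(f"expected value: ${ev:,.0f}")
    # expected value: $8,800,000 -- positive unless the fine grows
    # (or the odds of being caught rise) enough to flip the sign.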

~~~
KirinDave
> With Wells Fargo, Citi, Experian, Bank of America, etc., when their scandals
> went down, did they lose more money in the scandals than they gained from
> their antisocial behavior?

Wells: Yes, almost certainly. Citi: Good question. Everyone I've talked to from
Citi seems to think it was a disaster that hurt the business's bottom line.
BofA: Not sure which BofA problem we're talking about here. Experian: They
walked away. They're a great example of how the government SHOULD have come in
and made it more expensive and then given that money back to people they hurt.

On Experian, I do know that their scandal with fake credit scores direct to
customer hurt that business badly.

> More importantly, did the individuals who made the antisocial decisions, who
> are protected by limited liability, lose money? I doubt it.

In some cases yes, in some no.

> But I'd say these are anomalies and not the norm.

We could make it the norm :)

~~~
kerkeslager
Keep in mind the chain of comments you're responding to started with:

> [I]f we forced some liability on companies for what the software they sell
> does, that would change a lot of things fairly quickly.

In that context, it sounds like you're saying we don't need liability
regulation, we need data science to help consumers make better decisions, so
that the economic downsides to antisocial action are higher.

This is the repeated lie of laissez faire economics: that if we can just get
consumers to become savvy they will stop giving money to bad actors and the
invisible hand of the market will enforce ethics without having to resort to
regulation.

This has never worked. At best, regulation finally steps in after the
companies have trashed people's lives and fines the company, and the people
who made the decisions are forced to retire with their millions. At worst, the
companies lobby successfully and their sociopathic business practices become
not only the norm but the standard. Laissez faire economics is typically only
espoused by businesses when they don't have the regulators in their pocket.

Forgive my cynicism, but I don't think data science is the missing piece that
makes laissez faire economics work. It's simply not realistic to believe that
the average consumer will become knowledgeable enough to make ethical
decisions on what they consume. Most people aren't savvy enough, don't care,
or don't have the time. I like to think I understand most issues once I have
the data, and I care, but I simply can't keep up with all the different
companies and their misdeeds. The invisible hand of the market simply can't
keep up with this problem.

~~~
KirinDave
> In that context, it sounds like you're saying we don't need liability
> regulation, we need data science to help consumers make better decisions, so
> that the economic downsides to antisocial action are higher.

I'm not sure why you make it sound like an either/or decision. We're already
seeing that improved computing power and statistical methods, coupled with
their falling costs, are giving us transparency which can guide both consumers
and regulators.

It's one of the reasons I'm a fan of "legible" societies once the asymmetry of
power and information is overcome. We can all hold each other accountable.

If you check my comment history you'll see I'm a big fan of the CFPB and
generally want the government to make bad behavior more expensive, so I
appreciate the rest of your post, but you're preaching to the choir. I'm
furious at what the Executive has done in putting a scam artist at the wheel to
dismantle it.

~~~
kerkeslager
My apologies for a confrontational tone then. Your previous post came across
to me as being yet another defense of the idea that regulation is bad and
incentivized corporations will solve all society's ills. That's possibly more
a reflection of my own sensitivities than your communication. :)

I will say that a ton of changes that should be made are low-hanging fruit
that don't need data science to justify. We don't need data science to prove
that bailing out corporations when predatory lending goes wrong, or fining
corporations a fraction of the profits they made from money laundering, is
ineffective enforcement.

The underlying problem is that international corporations are, to some extent,
above the law, and that is a much harder problem to address.

------
ProAm
Such a good opening line: "The medical profession has an ethic: First, do no
harm. Silicon Valley has an ethos: Build it first and ask for forgiveness
later."

~~~
Alex3917
Yeah but both of those are just marketing postures that don’t really have
anything to do with the underlying realities of their respective industries.

~~~
dsr_
It's necessary to establish a policy before you can enforce it.

You can't argue about exceptions if you don't fundamentally support the
legitimacy of the policy, either.

"First, do no harm" is a policy statement. Once you accept it, you can make
arguments about tradeoffs (pain management, amputation, chemotherapy,
abortion), but without having a policy statement, there's no argument being
made, just a free-for-all mess.

~~~
Alex3917
> without having a policy statement, there's no argument being made, just a
> free-for-all mess.

Given that medical treatments are the second or third leading cause of death
in the U.S., how much good is the policy statement really doing?

------
muninn_
Instead of Harvard, Stanford, and MIT attempting to address "the dark side"
I'd prefer to see people who aren't graduating and going straight to work at
the companies who bring out the worst (and sometimes best) of tech's ethical
problems address this problem or at least be heard. Do we really need Goog.. I
mean Stanford lecturing us on ethics?

I guess it's good that they've at least somewhat seen the problem, even if
it's a problem because it harms future revenue streams. Maybe the NYT (which I
have a subscription to) is going to tell me next that Clinton has a 100%
chance to win the 2020 election?

------
rafiki6
Technologists aren't doctors. Doctors are a necessity because people have an
incessant need to stay alive. Technologists aren't a fundamental necessity to
society. We are business people. The only rules/ethics that govern us are
business rules/ethics. Some might argue, "but technology is required for us to
survive". It's not. We've survived plenty without it. Technology is required
for us to thrive.

~~~
dragonwriter
> Technologists aren't doctors.

Doctors are a subset of technologists.

> Doctors are a necessity because people have an incessant need to stay alive.

Medical care is an application of technology people are particularly willing
to sacrifice other things to pay for, but not without limit.

That's not the only application of technology for which that is true.

> Technologists aren't a fundamental necessity to society.

Hard to say that's any more true of technologists than of doctors; you can have a society
without doctors or without other technologists, but if you do, people will
very quickly start assuming those roles.

> We are business people.

Most technologists are not business people. A few are, but that's incidental,
not fundamental.

> The only rules/ethics that govern us are business rules/ethics.

At least in formal terms, this is true of _information_ technology, but it has
nothing to do with necessity or lack thereof (it's not true of lawyers, who
are certainly not necessary to survival). It's just that IT (unlike medicine,
law, or proper engineering) lacks professionalization.

~~~
hueving
Using technology != technologist in this context. Calling a doctor a
technologist essentially makes everyone a technologist and the term becomes
meaningless.

~~~
dragonwriter
You are welcome to pose a definition of technologist that includes the people
you mean to include and excludes the people you mean to exclude, but by any
coherent definition doctors are as much technologists as IT workers in
general, though I can see some reasonable definitions that would exclude most
of both and only include a few of either.

But, in any case, the lack of formal ethical rules applicable to IT is not
about how necessary technologists are or aren't compared to doctors.

~~~
hueving
Just Google it, I don't need to propose anything.

[https://en.oxforddictionaries.com/definition/technologist](https://en.oxforddictionaries.com/definition/technologist)

~~~
dragonwriter
Neither of the two definitions at your link seems to serve your "include IT,
exclude medical doctors" purpose.

~~~
hueving
Doctors are neither experts in a field of technology nor are they paid to
operate technology.

