
A “Silicon Valley” actor is terrified by what’s happening in Silicon Valley - vogon_laureate
https://qz.com/1118377/silicon-valley-actor-kumail-nanjiani-is-terrified-by-tech-industrys-blase-attitude-toward-privacy-and-other-issues/
======
nemild
In the vein of Kumail's critique, if it's useful to anyone, I wrote some
thoughts about software engineering and ethics from my own experiences:

A serial tech entrepreneur in Silicon Valley once asked me to design a “social
stockade” for his financial services customers. It would lock people out of
their social media accounts and tweet out/FB share to their friends when they
hadn’t paid a loan. He pitched it to prospective employees as meaningful work
that would reduce the cost of loans for the needy.

I was horrified that his product was being built and that many others would
likely take the role I was turning down. And he was hardly the first to pitch
his “innovation” as providing only good.

Every software engineer I’ve worked with has had a strong sense of personal
values and ethics, but the organizations we work for can take actions that are
at odds with these. I’d like to highlight a few of the key challenges you’ll
face and provide feedback for living your personal values. Most importantly,
it’s critical that you think about the impact of your work and consciously set
your personal values in advance of inevitable future challenges.

(Always available to discuss if valuable to anyone, feel free to email me
encrypted messages, see my HN profile)

https://www.nemil.com/musings/software-engineers-and-ethics.html

~~~
aphextron
>Every software engineer I’ve worked with has had a strong sense of personal
values and ethics, but the organizations we work for can take actions that are
at odds with these. I’d like to highlight a few of the key challenges you’ll
face and provide feedback for living your personal values. Most importantly,
it’s critical that you think about the impact of your work and consciously set
your personal values in advance of inevitable future challenges.

I'd agree with this assessment of developers. However I've come into these
exact moral dilemmas myself in the line of working for various companies doing
web programming. How did I resolve them? By making the decision that affected
my own bottom line favorably. When looked at in isolation, it's impossible for
a single developer to say no to the incentives of doing "evil" work. It won't
be until the field of software engineering really matures and develops a set
of standards bodies like other professions that engineers will have some sort
of protection. Physicians can refuse to perform a procedure they're told to in
the name of protecting their medical license. A software engineer should
likewise be held to the same standard.

~~~
59nadir
> I'd agree with this assessment of developers. However I've come into these
> exact moral dilemmas myself in the line of working for various companies
> doing web programming. How did I resolve them? By making the decision that
> affected my own bottom line favorably.

That would be _not_ having a strong sense of personal values and ethics. If
your good morals always lose out to the promise of more money, that's you not
having good morals. Just own up to it instead of pretending the real culprit
here is a lack of governing bodies that should be protecting you from all the
evil money out there.

~~~
TeMPOraL
You cannot build the world based on the strength of individuals' moral spine,
because a) there's plenty of people with weak ones who will happily outcompete
individuals refusing unethical work, and - the more important - b) people
respond to incentives. For instance, we have the law and the police _even
though_ people in general seem naturally good - they exist to counterbalance
incentives that make people do bad things. Saying "just own up to it instead
of pretending the real culprit here is a lack of governing bodies that should
be protecting you from all the evil money out there" ignores the most basic
fact about humans - people do _respond to incentives_.

Moral dilemmas don't look like "do I accept this high salary for doing evil,
or prefer a slightly smaller salary for not doing evil". Moral dilemma is what
a hypothetical Jane faces, after one of her parents died and the second is
unemployed, when she has to provide for the parent _and_ her younger siblings,
while also trying to start her own life with her fiancée. For her, abandoning
a high-paying salary that is actually eaten up entirely to let two families
live in stability and with dignity, is _not a trivial problem_. The idea of
having support of governing bodies is to turn such life-quality-threatening
situations into _easy_ choices, so that people don't get severely punished for
doing the right thing.

Also: ethics can be, and is, used to abuse people. In Poland, we have this
situation with paramedics, nurses and doctors on residency, who are severely
overworked (to the point it's a serious health threat, and causes an
occasional suicide), severely underpaid, and on top of that they're told by
everyone that they _can't_ protest because that would mean providing worse
care for patients, which is obviously _unethical_.

~~~
dwaltrip
This is a great post. However, it also seems impossible to build the world
based solely on official policy and law. We _need_ people to
continually make moral decisions in their own lives, to the best of their
ability. This involves making sacrifices and ignoring incentives at times.

How do we find this balance? I'm not sure. I think we may always be searching
for it.

~~~
TeMPOraL
I agree. What I mean is recognition of human nature. We do respond to
incentives, there's no working around that (at least for now; maybe we'll have
some mind-engineering tech in the future). We want people to do good, and to
achieve this we need to both strengthen individual sense of morality _and_
shape the incentives _and_ engineer systems that protect from pressure -
because we recognize no human is immune to incentives.

Will we find the right balance? I don't know, but I certainly hope so. The
balance we have now is dynamic - people do bad things, but not _enough_ of
them to cause a societal collapse. Incentives are recursive, too -
institutions are formed not out of abstract thinking, but because enough bad
things were happening that some people had a powerful incentive to stop them.

~~~
dwaltrip
Fully agree, I just wanted to stress both aspects. I do think the systemic and
incentive based side of things is more difficult to understand, so it's good
to articulate those concepts. Either way, I think it's important to keep
having these conversations.

------
transitorykris
I love Stafford Beer's "The purpose of a system is what it does" here. It
separates fact from intention when reasoning about systems. The engineers that
Kumail catches off guard are (in my opinion) confused because the issues he
raises are so far from their intentions.

[https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...](https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_what_it_does)

~~~
Jun8
I really like this. Mirrors Wittgenstein's "the meaning of a word is its use"
maxim (see, e.g.
http://www.maciejratajski.com/work/the-meaning-of-a-word-is-its-use-in-the-language)
in philosophy of language.

~~~
supernumerary
I love Wittgenstein and agree with your gist.

Being trite - Wittgenstein also said 'The world is all that is the case' ... a
precept that a mission to Mars, for example, certainly flexes. Of course, as
soon as it came to pass, the meaning of 'the world' would reconfigure itself.

To the point though ... another maxim that I often find myself employing is:
'The road to hell is paved with good intentions'.

------
leggomylibro
I think that a better look at this subject is a recent piece in The Atlantic
titled, "Are Facebook, Twitter, and Google American Companies?"

https://www.theatlantic.com/technology/archive/2017/11/are-facebook-twitter-and-google-american-companies/544670/

tl;dr, this back-and-forth is unsettling to a Federalist:

>In response to a tough line of questions from Senator Tom Cotton of Arkansas,
Twitter’s acting general counsel, Sean Edgett, gave two conflicting answers
within a couple of minutes. Cotton pressed Edgett on Twitter’s decision to cut
off the CIA’s access to alerts derived from the Twitter-data fire hose, which
is provided through a company it partially owns, Dataminr, while the companies
reportedly still allowed the Russian media outlet RT to continue using the
service for some time.

>“Do you see an equivalency between the Central Intelligence Agency and the
Russian intelligence services?,” Cotton asked.

>“We’re not offering our service for surveillance to any government,” Edgett
responded.

>“So you will apply the same policy to our intelligence community that you’d
apply to an adversary’s intelligence services?,” Cotton asked again.

>“As a global company, we have to apply our policies consistently,” Edgett
replied. “We’re trying to be unbiased around the world.”

>Cotton then turned to WikiLeaks, which the Intelligence Committee has
designated as a nonstate hostile intelligence agency, asking why it had been
operating “uninhibited” on Twitter.

>“Is it bias to side with America over our adversaries?,” Cotton demanded.

>“We’re trying to be unbiased around the world,” Edgett said. “We’re obviously
an American company and care deeply about the issues we’re talking about
today, but as it relates to WikiLeaks or other accounts like it, we make sure
they are in compliance with our policies just like every other account.”

I guess it is kind of cyberpunk-y. All we need now is for US Marshals to lay
siege to some SF headquarters, facing contracted PMCs denying the validity of
their warrants.

~~~
sanbor
This is the second time that I read that WikiLeaks is classified as "hostile
intelligence agency". Isn't the purpose of intelligence to gather information
to gain advantage over adversaries? It sounds to me like a redundant term like
"hostile military". And BTW WikiLeaks leaks are public so all other
intelligence agencies and citizens can learn while state agencies try to keep
things obscure as much as possible. Who's hostile?

~~~
kharms
Wikileaks leaks are selective. "Public" only describes the things they
release. The hostility is in the choice of releases.

~~~
noam87
There's no denying they promote a certain narrative in their PR, but are there
known examples of leaks withheld by WikiLeaks, where the whistleblower had to
go elsewhere?

~~~
kharms
Here is one example, where they refused to publish leaks from/about Russia:

http://foreignpolicy.com/2017/08/17/wikileaks-turned-down-leaks-on-russian-government-during-u-s-presidential-campaign/

I recall another case when Wikileaks itself advertised a Russia leak and then
never released anything.

------
pmoriarty
Corporations only worry about ethics if they think it'll affect their
reputation or bottom line.

Even then many of them are happy as long as there's public perception that
they're acting ethically, even if in reality they're not.

~~~
untog
Corporations, yes. But employees of those corporations, less so. As a
developer making <insert ethically dubious thing here>, you could have qualms
about it.

~~~
pdkl95
Unfortunately, many engineers like to _pretend_ their work is apolitical, but
there is "No Neutral Ground in a Burning World"[1]. Everything is political,
eventually; technology patently has political consequences that are large in
scope, highly complex, and difficult to understand fully.

When you are implementing a new technology, please take some time to
_actively_ consider _all_ of the potential consequences you can think of.
Maybe your new tech just needs simple safety-considerations similar to power
tools (dangerous _if misused_). Maybe your new tech will disrupt existing
structures that many people rely on; do you have a plan to handle that? Can
someone easily use your new tech to undermine civil rights or basic freedoms?

Obviously we can't foresee everything, but we can at least _try_ to consider
the potential consequences to the world, society, and _people_ outside of
tech. I know it's hard to walk away from a cool idea. It's even harder to do
the right thing when it risks your salary - or your safety. For better or
worse, this kind of question needs to be part of the new-tech development
process. As Dan Geer said[4][5] in a recent keynote, "You have not picked a
career. You have picked a crusade."

[1] I _strongly_ encourage everyone to watch this 30c3 talk[2] by Quinn Norton
and Eleanor Saitta (or read the transcript[3]).

[2] https://media.ccc.de/v/30C3_-_5491_-_en_-_saal_1_-_201312272300_-_no_neutral_ground_in_a_burning_world_-_quinn_norton_-_eleanor_saitta

[3] http://opentranscripts.org/transcript/no-neutral-ground-burning-world/

[4]
[http://geer.tinho.net/geer.source.27iv17.txt](http://geer.tinho.net/geer.source.27iv17.txt)

[5]
[https://www.youtube.com/watch?v=hcIiD4UUDE8](https://www.youtube.com/watch?v=hcIiD4UUDE8)

~~~
pmoriarty
The most overtly political act most software engineers are likely to make is
to choose GPL over BSD or vice-versa.

Walk away from a cool idea for ethical reasons? Risk their salary or their
safety? I'm sure such engineers must exist, but they're not among any I've
ever known.

Unfortunately, I am rather pessimistic. We're in this Orwellian/Huxlean
dystopia for a reason.

~~~
pdkl95
> they're not among any I've ever known

You just met one. In the mid 90s, when I was writing firmware and drivers for
HIPPI[1] networking equipment, the Malaysian government bought a bunch of our
hardware (indirectly), as part of a big system that was supposed to receive a
bunch of TV channels from satellite, buffer it for a couple minutes on disk,
and then re-broadcast the streams on local TV channels. The buffer, of course,
allowed for the "live" TV streams to be _censored_ on the fly.

This was before flash was cheap and ubiquitous, so the firmware for the FPGAs
was on a socketed EEPROM. Changing the firmware required removing the chip and
a computer with PROM programmer hardware. Bugs happened[2], and my boss
decided I needed to make "our biggest customer" happy by _flying to Malaysia
with an --undeclared-- bag of EEPROMs with better firmware_ and personally
updating their hardware.

It was my first "real" job, but I was still 3 more years of classes away from
my bachelors, so I was getting crap intern wages of $11/hr. When I started to
suggest I didn't really want to smuggle chips into a notoriously
authoritarian state to help them censor their TV, my boss immediately offered
me a 6-figure salary _and_ a $60,000 cash bonus. I was seriously tempted.
Instead, I gave them my 2-weeks notice the next day...

(side note: a few years earlier, the NSA offered me a full 4-year scholarship
(inc. housing and cash stipend), but the "you just need to work for us for
150% time (6-years) after you graduate" price tag made that offer a _lot_
easier to turn down)

[1]
[https://hsi.web.cern.ch/HSI/hippi/procintf/pcihippi/pcihipde...](https://hsi.web.cern.ch/HSI/hippi/procintf/pcihippi/pcihipde.htm)

[2] actually, it turns out our PCI interface chip was faulty[3]

[3]
[https://news.ycombinator.com/item?id=10609033](https://news.ycombinator.com/item?id=10609033)

~~~
MichaelGG
That sounds very interesting! If you were working with their government, why
did you need to smuggle them in?

~~~
pdkl95
_We_ - as manufacturers of HIPPI NICs and crossbar switches - were
technically just supplying OEM parts to a Canadian company. They were the
people trying to sell a giant turn-key censorship solution to the Malaysian
government. I believe we were simply trying to "help our biggest customer"
with an IBM-style field service tech.

Also, this was a rush fix. Flying me on the next flight was supposedly faster
than the usual shipping/courier services? At least that's what the boss
believed? I never got the full explanation, but I suspect that the Canadian
company was ~30-50 hours away from losing the entire deal, which was supposed
to be our main (only?) source of revenue until we could ship an important new
product[4] a few months later. With the benefit of hindsight, instead of a
rational, sound plan, the whole situation smells more like a panic about
losing VC funding.

[4] We had re-purposed 8 & 16 port crossbar switches where each port could be
(mix-n-match): HIPPI, FC, ATM, 2xFDDI, 2xSCSI3 (!), 1000BASE-T, or 8x(100BASE-
TX/FX) with arbitrary layer-3 routing and arbitrary tunneling. Two switches
with SCSI and ATM ports did something similar to FCoE/iSCSI... in 1996. RAID 1
over 10km of fiber was awesome. Except it never shipped, because everyone was
focused on our broken NIC. --sigh--

------
madmax108
It's scary (Stanford prison experiment-like) how easy it is to convince a
regular Joe employee that what he's working on is "for the betterment of the
world", when that statement is, to say the least, very fuzzy, and (as recent
events have shown) one man's Utopia is another man's surveillance state. We
put far too much emphasis on "world-changing ideas" while completely ignoring
their real-life implications, as the article rightfully points out.

Reminds me of the statement that Noam Chomsky ended his recent talk at Google
with (which I consider one of the nicest "burn" moments ever):

Interviewer: It's not every day that a non-Googler gets to sit in a room
full of people who work at Google, and are s/w engineers, and are advertising
experts, and are market experts in different fields. Do you have anything
that you'd like to ask us?

Chomsky: <shrugs> Why not do some of the serious things?

[https://youtu.be/2C-zWrhFqpM?t=59m16s](https://youtu.be/2C-zWrhFqpM?t=59m16s)

------
MrBuddyCasino
This just in: actor discovers corporations don't have a soul.

Why don't you come over here to Europe? We have strong data protection laws.
Oh, btw, you'll have to cut your salary by 50%, because no Silicon Valley,
and because strong data protection laws. How's that sound to you?

~~~
dmix
Most data protection laws just force ineffective bureaucratic processes on
companies in the name of "doing something", usually in the form of outdated
checklists. Meanwhile intelligence agencies, hackers, and ad companies are
still consuming vast quantities of data uninhibited.

It really does largely just slow companies down and forces gov agencies,
doctors, banks, and other critical industries to use old insecure software and
inefficient corporate processes. RFP'ing a new system is suddenly 2x the cost.

I'm all for strong privacy and property rights at the consumer level, plenty
of law in this area in most countries is very outdated and from another era.
But the tendency of European and modern US/UK/Canada/etc. countries is to go
well beyond the courts and intervene directly in company operations.

I've never heard anyone credit gov-mandated checklists as the reason they
prevented a hack.

It's always keeping up to date with the latest industry best practices and
_caring_ because there are real costs. Strengthened courts and a caring media
will create real costs and incentivized best practices.

------
thomastjeffery
The heart of the problem is that popular communication networks are
centralized and unencrypted.

~~~
dredmorbius
Good protocols don't cure bad ethics.

Mind: I would like to see decentralisation and better encryption protections,
generally, but those by themselves _won't_ solve the problem (you still need
to address ethics, rights, and power imbalances), and still leave certain
present attacks in place.

David Gerard, author of _Attack of the 50 Foot Blockchain_, makes the
exceedingly good point in an FT interview, that Bitcoin and Blockchain
themselves are attempts to tech around the trust problem, and that this raises
challenges _because trust itself buys you an immense amount of efficiency_.

[https://www.ft.com/content/61cdc5c8-370e-11e7-bce4-9023f8c0f...](https://www.ft.com/content/61cdc5c8-370e-11e7-bce4-9023f8c0fd2e)

I'm starting to come to the view that information technology itself directly
_attacks_ interpersonal trust, at the macro scale. I may not be articulating
the argument well at this point, though I've tried:

[https://www.reddit.com/r/dredmorbius/comments/6jqakv/communi...](https://www.reddit.com/r/dredmorbius/comments/6jqakv/communications_advances_undermine_trust/)

~~~
TeMPOraL
Not sure if the technology itself attacks interpersonal trust - but I do see
_a lot_ work in tech being done exactly towards that. In particular, crypto
folks from the blockchain sphere do their damnedest to invent ways in which you
don't have to trust anybody to do things. I do not feel it's a good idea, at
least not for basing society and its infrastructure on that.

The core of my argument would be: trust is something we're good at, as humans.
It's _efficient_ , and it's _flexible_. By flexibility I mean that it handles
new/weird situations well. Trustless solutions, on the other hand, burn a lot
of compute to replace the need to trust another person, and they do not handle
corner cases in any way.

A similar thing is bureaucracy - it exists as a way to reduce dependence on
trust, at a serious loss of efficiency. Frankly, I'd argue that today's
bureaucracies are so inefficient that without hidden trust acting as a grease
on the wheels, they'd grind to a complete halt. They're also inflexible, and
they're usually only saved - again - through trust that leaves room for
individual bureaucrats to "cheat the system" here and there.

All in all, I feel the problem is with the scale of the societies we're trying
to create. When it was just 100 people in your village tribe, everyone knew
everyone else, and trust-based society _just worked_. As we grew our societies
by many orders of magnitude, we've lost the interpersonal mechanisms of
establishing and maintaining trust (which includes punishing actors for
breaking it). And it seems to me that instead of trying to find _new_
mechanisms for enabling trust at scale, we're just giving up on the whole idea
and replacing it with burning physical resources. This, I feel, is a wrong way
to go.

(I'm probably not articulating the argument well either.)

~~~
dredmorbius
Take a look at the Reddit item I linked, it's my own (uncharacteristically
brief, you're welcome) essay.

I've been digging into the literature on this further, and ... there's some
general support, though the overall picture's somewhat mixed.

My starting observation was that, while the ability for two parties to
_voluntarily_ communicate might increase their trust, the ability for a party
(or parties) to _unilaterally_ surveil others, particularly at scale, is
almost certainly _corrosive_ to trust. Issues such as comprehensive systems
surveillance (user logging and monitoring on computers), cameras, microphones,
etc., come to mind.

In reading on the topic, there seems to be a great deal in sociology,
particularly Durkheim and Weber, addressing the point. I'd observed that
_every_ major empire has had a strong religious component (with the possible
exception of the Mongol Horde), and that religion didn't start breaking down
strongly until ~18th and 19th centuries, under the onslaught of _both_ science
and reason, _and_ improved communications.

JoAnne Yates and James Beniger have both written extensively on the nature of
communications technologies and practices _especially_ within commercial
contexts. Beniger's _The Control Revolution_ in particular details the
evolution of commercial and commodities practices in the Americas starting
before the American Revolution. A significant feature _especially_ prior to
the 1830s / 1840s, at which point the telegraph provided instant
communications _especially of prices_, was that movements of goods required
agents, working at a distance, and with considerable independence and
autonomy. Beniger describes the counterflows of goods and financial records,
generated at each transshipment point (each triggering a transfer of goods, a
financial transaction, and a trust relationship between originator, buyer, and
agents).

I'm not saying that the trust was always well-placed, but that _there was no
real alternative_ , other than setting up rules, perhaps some form of non-
realtime checks (multiple agents rather than one, say, such that outlier
behaviour might be observed), and a _very_ strong reliance on reputation. This
is reflected even in the language of business communications, which is based
on establishing and maintaining trust -- later streamlining of external and
internal communications in the early 20th century removed much of the ornate
19th century language and replaced it with the no-nonsense, strictly-business
correspondence of the 20th century. Email, texts, and Tweets have stripped
that further still.

Beniger _specifically_ mentions bureaucracy as a technology, by the way.

And yes, scale has a _ton_ to do with this. Another notion I'm looking at is
that networks (I'm looking at interpersonal / social, as well as
technological) directly affect information flows through size, topology, cost
functions, cohort selection, protocols, and more. (Yates appears to be
studying a fair bit of this, though I'm not sure to what extent her views are
as technical or information-theoretic as mine.)

Bureaucracies are, by the way, generally very _efficient_ , so long as they
can be subject to efficiency constraints. They _do_ get hidebound, in that a
bureaucracy is a mechanism for formalising information flows (literally
"creating forms" -- a major class of business correspondence), and that
there's _always_ a problem between a tightly-specified limited grammar and the
Real World. (See: things programmers believe about time, time zones, names,
places, etc., etc., etc.)

There's also the matter of dealing with information _at scale_ , for which I
find powers of ten a useful-if-rough organising concept. Treat these as
ranges, with [previous bound] ~< n ~<= [stated upper bound], and yes, with
fuzzy edges.

10^0: Full focus. An item.

10^1: Reasonable scope of daily focus. A short list.

10^2: An information-gathering scope, it's possible to be at least generally
aware of each item. A long list.

10^3: Stressing the bounds of even skilled individual un-managed awareness. A
compilation of lists, a book. Pre-Gutenberg Europe had about 30,000 _total
books_ (volumes, not titles), and few libraries exceeded 1,000 total volumes.

10^4 - 10^5: Some sort of organised management system is necessary, can be
paper-based. About 300k - 1 million books are published annually
("traditional" and "self-published" respectively).

10^6 - 10^8: A formalised and electronic system is probably necessary at this
point. Statistical treatments quite helpful. The largest libraries contain ~24
million volumes.

10^9 - 10^11: Statistical treatments almost certainly necessary.

10^12+: Big data, machine-learning. Multiple levels of abstraction required to
humanly comprehend scales. A galaxy's worth of stars. Total number of cells in
the human body.

I use books, notes, and index cards considerably. The sense of information
scale (and tractability) as I range from 1 card to 10 to 100 to 1,000s is
interesting and visceral.

Now make those people and organisations and businesses and systems, and try to
work with them ....

See:

Beniger: http://www.worldcat.org/title/control-revolution-technological-and-economic-origins-of-the-information-society/oclc/895626594

Yates: http://www.worldcat.org/title/information-technology-and-organizational-transformation-history-rhetoric-and-practice/oclc/44669262

------
starchild_3001
I've read the article and most of the comments here. Still not sure what
ethically questionable things we're talking about. Is serving content related
to your personal interests questionable? Or is collection of personal data for
commercial (but benign and legal) purposes questionable?

I can see that selling and wide-scale distribution of personal data can be a
problem unless explicit consent is obtained. Can someone clarify who's doing
that and for what reason?

~~~
peoplewindow
Seems like he didn't bother to specify exactly what he found so objectionable.
Given that, it's impossible to evaluate his claim that executives were
"confused" because "nobody ever asked them the questions". Maybe nobody ever
asked them because the questions he was asking were stupid or ridiculous?

After all, it may sound dismissive, but I'm reminded of a quote by Patrick
Stewart. "That's the biggest danger, you see: believing that you really are
more important than everyone else. We're not, you know. We're just actors".
Maybe if Kumail spent his life building things instead of pretending to be
someone who builds things, he'd have a different perspective on technology and
ethics.

~~~
dcow
FWIW Kumail has undergraduate degrees in computer science and philosophy. We
went to the same college. The comp-sci department, and the college in
general, have a strong sense of ethical responsibility. In fact the computer
science department makes it a point to have students take an ethical pledge
as part of the major (optional, of course). Please don't get the idea that
because Kumail
has chosen a career in acting that he has no business commenting on other
industries. Even though I agree we should spend time in others' shoes before
judging, I don't think his statements here are unwarranted. Your comment feels
a little uninformed and defensive.

~~~
peoplewindow
Yes, but so what? It's very easy for people who don't actually _make_ things -
like actors and academics - people who never face _tradeoffs_ in their lives,
to get up on their high horses about people who do. If Kumail wants to comment
on the ethics of the computing industry he should man up and write an essay in
which he lays out his case, and shows why his preferred tradeoffs are better
than those chosen by industry. Like anyone else would have to do, if they
wanted influence.

Instead we get this clickbaity junk based on a series of tweets, which nobody
would have seen or cared about if he wasn't a famous actor.

~~~
dcow
I don't think it's Kumail's fault how this got published. I got the
impression this was a report on comments he made, not an attempt at an
exhaustive essay on the state of computing. I mean, I am professionally in
the industry, so let
me say: I agree that computing across the board lacks an ethical framework and
when I talk about this I usually am met with similar responses: apathy or
ignorance.

~~~
peoplewindow
I think you get that response because the computing industry has been
constantly attacked with very flimsy, weak and agenda-driven accusations of
unethical behaviour, for a very long time. So these sorts of accusations have
lost their power, there were too many boys crying wolf.

A large part of this was driven by the decline of the newspaper industry. At
some point Murdoch decided that Google was evil because of Google News, and
the future of news was the iPad. This from a guy who never even uses email. So
he gave some speeches and the orders went out and the Murdoch press
immediately started attacking Google with very dubious stories, alleging
unethical behaviour. I remember this inflection point quite well. For instance
the WSJ paid someone to go digging and they found that the behaviour of Safari
had changed with respect to third party cookies, in ways that weren't
following the specs, and now some old code Google used was setting cookies too
widely or something. This regression had of course not been noticed by anyone
because everything still worked. So they blew it up into a major drama and
claimed it was all an evil conspiracy, instead of a bug in Safari. Meanwhile
Apple got glowing praise and a free pass.

It wasn't just Murdoch of course. The whole industry started attacking
internet companies and it was all about money. See the "Google Tax" in
Germany, Spain etc. So the supposedly unethical behaviour they were trumping
up was very often not unethical at all, or only unethical by some totally
meaningless redefinition of the word (e.g. all advertising being considered
unethical).

So when Kumail tweets - and the _purpose_ of tweeting is to get noticed and
spread a message, so I give him no slack for that - and this gets picked up
and inflated by the media, my impression of Kumail goes from neutral to bad.
Twitter is not a reasonable place to start a debate about the ethics of
technology and if he was an engineer and not an actor, he'd know that. But
he's just an actor. So why give a shit?

------
dhoulb
I got into a conversation once that I think frames this well. We had WILDLY
different views on whether going to Mars was a good idea (like, violently
polar opposite views). The strength of our disagreement surprised me.

I was arguing that we should “race to Mars”, mainly because the value of
having a second planet dramatically increases the odds of survival for the
human race as a whole. Thus Mars is one of the highest-importance activities
we could possibly be focusing on.

My friend was countering that all the rich people escaping to Mars would make
them stop caring about Earth and their fellow citizens, and that this was just
about the most abhorrent act he could think of.

He’d rather see everyone die together on Earth than have a small group of
people live (at least it’d be fair). I’d be happy to sacrifice 95% of the
humans alive today as long as some live somewhere (I’m ambivalent about who
they are; I presume there’s a formula to be found?).

I think my view is more that of Silicon Valley/entrepreneur/programmer types.
These people (me, etc) want the best long term outcome and will take big
actions towards that, even knowing it’ll cause some short term pain.

I don’t think ‘regular’ people think like that. They generally care more about
the people around them, their own pain, their tribes, their cities, etc. (But
not at all about trillions of as yet unborn humans.)

I think it’s easy to label Silicon Valley as “unethical monsters” who’re out
for themselves. But assuming we’re talking about the crime of “innovation
without regard for effects” (as opposed to ACTUAL rulebreaking, like theft,
assault, or fraud, which I assume the Valley is no more guilty of than anywhere
else in the world), the intentions of entrepreneurs aren’t evil or unethical;
they just care more about the survival of the entire race than about today’s
people.

I also think a lot of hackers/builders/entrepreneurs, for all the optimism
they have about growth and innovation, are simultaneously very
realistic/pessimistic about all the various ways our race is fundamentally
screwed, and kinda recognize they need to be more powerful and have more
resources in order to do anything about that.

Startups are more about “getting us out of this fine mess we’re in” than money.

~~~
sebular
There's enough straw in your comment to make an army of men, and almost too
much cringe to handle.

For starters, you're putting a mountain of words in Kumail Nanjiani's mouth.
It's odd that you must be told this, but when you use quotes to summarize your
opposition's arguments, you're supposed to wrap them around words that were
actually used. A quick Ctrl+F on that page leaves one guessing whether you're
dishonest, careless, projecting your own insecurities, or all of the above.

Then you go and draw a line in the sand and separate yourself from "regular"
people who lack the capacity to see beyond their small and petty concerns. You
applaud Silicon Valley entrepreneurs as holy warriors fighting for humanity's
"best long term outcome" without providing a single example. You claim that
people start businesses not to make money, but to save mankind itself.

All this grandiose talk while you fervently pat yourself (and your kind) on
the back, but your only tangible anchors are imagined words and flimsy
analogies to humans colonizing Mars.

You are the exact personality that the Silicon Valley team expertly mocked in
the first season.

"We're making the world a better place!"

And for the record, if you managed to get 5% of the human population on Mars,
that would be ~350 million, more than the entire population of the United
States. If we put that much energy into moving people to a planet that's
currently capable of supporting life for zero humans, you'd think we could've
built a pretty damn good defense system to knock asteroids off a collision
course with Earth.

Why is it that so many people who describe themselves as forward-thinking are
more attracted to the idea of terraforming a planet with a poisonous
atmosphere than lifting a finger to keep our little oasis in decent condition?
I'm not saying the entire human race should rise and fall on a single planet,
but don't try to paint your Mars fantasies as an altruistic plan to save all
the coarse-minded sheeple from themselves.

~~~
dcow
It never stops astonishing me how many people claim the ends justify the means
without the slightest indication of ever having considered the moral or
ethical implications of such a stance. Sacrificing 95% of the population on
Earth so that 5% can survive on Mars also shows how fundamentally perverted
the mindset is. Yes, there are ethically grey areas, but it's all about the
journey, because that's all there is. Anyway, thanks for utterly destroying
the GP so eloquently.

