Unfortunately it's a course a lot of my classmates have expressed disdain for. To them it sounds more like a "gender-studies course" (their words) than a course for "super serious no-bullshit smart people" (my interpretation). I'm hoping the readings will have them reconsider and realize how important it is to consider the ethical implications of their programs.
As an example: a friend and classmate of mine works for $bigcorp. He had an idea for an employee engagement system: every employee would have a digital avatar that follows them around on the premises. So (as an example) when they go to the coffee machine, their avatar high-fives the avatar of another person who's also getting coffee. This, as you can imagine, requires tracking the movement of employees to a fairly high degree. The project got two months into development before the "right" person heard about it, raised a fuss, and got the project cancelled.
Neither my friend nor my other classmates see the problem with the precise tracking of every move of every employee - that scares me a lot.
What we need are not "ethics in tech" courses, but for people in tech to get more exposure to general humanities education. Require everyone getting a degree at your university - computer science majors included - to read and discuss classic literature, history, and philosophy. Exercise the parts of their brain that deal with questions whose answers can't be reduced to code. Ethics cannot be covered by a pamphlet of tech-specific rules; to act ethically, a person needs perspective on human nature and on what it means to be a member of society.
For example: Jan 23rd's reading is on Pascal and the art of persuasion:
"Pascal, the keen-minded philosopher and mathematician, fathomed the human traits of man's nature with the same accurate measurements which made him famous in the realm of geometry. Read his searching analysis of man's conceit."
Though the year's effort is not an ethics course per se, the sum is a fantastic grounding in ethics (among many other things). When I did the full year, I could not help but look up the authors and works in more detail on Wikipedia and Google. Delving in further, even for just another 15 minutes a day, really helped me 'suck out the marrow' from the works. Especially since the guide was compiled at the end of the Gilded Age, before the wars and post-modern life, comparing its view to today's is unavoidable and fascinating.
Part of 'ethics' is mulling things over a lot, and The Harvard Classics is one of the best jumping off points for that out there.
It's free, excellently curated, time tested, and quick.
These books/things taught me nothing (nil) about ethics - they are about questions that were relevant centuries ago.
On the other hand: good hard science-fiction literature can sometimes teach you a relevant thing or two - though I admit the ratio of "text to read" to "important lessons learned" is rather low (i.e. you have to read immense amounts to get a few important lessons). Because I love science fiction, this doesn't bother me, but people who follow my advice and start reading science-fiction literature just to learn some ethics lessons will be deeply disappointed if they have no deep love for the genre.
Many of the "classics" of classic literature (no pun intended) talk about questions that are perpetually relevant, and the same goes for lots of things that fall under history and philosophy.
I don't dispute there's a lot of garbage out there that falls under "it's old, so we'll teach it", but I can assure you that the vast majority does speak to relevant topics. And as in SF, the works that endure are the ones that speak most clearly.
In general, all stories teach empathy - how to put yourself in somebody else's shoes. If you've found a better way to learn that via SF, more power to you. The classics are still relevant because they provide the foundation upon which all this is built, and so they become important once you discuss things in a wider circle. (E.g. I might like fantasy and not have a solid enough grounding in SF; we need to find some common ground. By acclamation, the world has picked literature/history/philosophy as that common ground.)
And so I'd encourage you to maybe go out and give some of it a try :)
If you care for recommendations: for philosophy, I recommend "What Does It All Mean?" by Thomas Nagel. I don't have a great recommendation for history - I like "By Steppe, Desert and Ocean", but it's incomplete & somewhat Eurocentric. True world history in a single book is probably asking a bit much. Same goes for literature, but even more so - it's really a matter of taste. I adore "The Master and Margarita", but beware, it's weird :) "The Grapes of Wrath" is a moving and very relevant one. The Iliad is spectacular, but I only know German translations; I've never read it in English.
> they are about questions that were relevant centuries ago
It's clear you're missing the point: ethics that are specific to a given technology, or even a specific social structure, are of limited value. If a person's moral system is going to be rich enough to apply to new technologies and situations that nobody has ever faced before, they need a broad perspective on the human condition: a deeper understanding of right and wrong and the in-between.
Civilization was very different centuries ago, but people were not.
Very different civilizations imply that very different ethical questions are relevant. I'm just saying that global (network) interconnectedness has brought a huge plethora of new ethical questions that were never relevant in the past - exactly because they were science fiction at the time. That is why I mentioned science-fiction literature as far more relevant further up the thread.
When I read these books, I learn nothing (nil) about ethics, they are about questions that are relevant centuries from now.
More seriously though... the debate about whether a democracy should invade a country to ensure its security while its main enemy is a completely different country isn't a 21st-century concern: the Athenians debated the same thing before invading Syracuse. What the role of rich people should be in a republic with vast wealth inequality and a rapidly urbanizing population, during a time of increasing demagoguery and populism, isn't just a concern in a 21st-century US election for the office of President; it was also a concern in a 2nd-century BC Roman election for the office of Tribune. We could come up with a new analysis of the trade-offs between a government of elected officials drawn from the people and one consisting of specialized elites trained for the task, but Plato already summarized the main arguments in the Republic.
The list goes on. I would sum up saying there's nothing new under the sun, but someone already beat me to it.
> “Soon you’ll be ashes, or bones. A mere name, at most—and even that is just a sound, an echo. The things we want in life are empty, stale, and trivial. Dogs snarling at each other. Quarreling children—laughing and then bursting into tears a moment later. Trust, shame, justice, truth—“gone from the earth and only found in heaven.” Why are you still here? Sensory objects are shifting and unstable; our senses dim and easily deceived; the soul itself a decoction of the blood; fame in a world like this is worthless. —And so? Wait for it patiently—annihilation or metamorphosis. —And until that time comes—what? Honor and revere the gods, treat human beings as they deserve, be tolerant with others and strict with yourself. Remember, nothing belongs to you but your flesh and blood—and nothing else is under your control.”
― Marcus Aurelius, Meditations
I don't mean that to be demeaning, but just the reality of what gets covered. I did take Intro to Philosophy (Can't say I learned anything new in it), but most courses were things like French Caribbean Literature, Art History, Civics, etc.
And second, as someone else below mentioned, a lot of the more general and theoretical things you might pick up on humanities classes can be used as a lens to think about tech-related issues, even if the professor or readings haven't actually explicitly done that. (In fact, if none of the readings have done that, trying to do that might make for a good final paper --- it's something that's at least worth talking to the instructor about.)
- How social media might affect democracy
- How AI surveillance could impact human rights
- The emergence of tech companies with the resources of nation-states
You're essentially creating a course you want taught. You can go look at the descriptions of some civics courses for non-majors. I don't think you're going to find much discussion of AI surveillance. They have to cram what they can into a semester or quarter.
For instance, from a libertarian perspective I'd argue:
- Social media makes democracy stronger because people can communicate more. There's little to say beyond that.
- AI surveillance is no different to surveillance powered by people, except cheaper. It has no real impact on anything beyond scale, which may make a practical difference but makes no fundamental difference to any arguments about surveillance. A much better question would be how cryptography affects surveillance and human rights, but very few ask that except the politicians and investigators it actually affects. This is because questions in the media about "AI" are almost always standard leftist anti-capitalism targeting tech firms whilst posing as something new. AI is merely a prop to start tired old conversations, not a real interest.
- "The emergence of tech firms with the resources of nation states" being the very next bullet after "AI" is just reinforcing my point. You aren't interested in tech ethics. You want to debate capitalism itself, which universities and academics have wasted far too much time on over the centuries. The resulting "debates" have yielded no insight but lots of violence, subterfuge and general chaos.
It's also - like the other question - based on false premises. There are no tech firms with the resources of nation states. A very few might appear that way if you compare arbitrary and incomparable statistics like GDP vs market cap, but those don't measure the same thing. As a simple example showing how false this is, tech firms can't make law or raise an army, which even the tiniest and most useless nation states can and do.
However, good luck arguing any of these points in a paper required by such a course and not getting a fail grade. Universities are the absolute last places that should be trying to teach ethics.
> It has no real impact on anything beyond scale, which may make a practical difference but makes no fundamental difference to any arguments about surveillance.
Scale makes a huge difference. The obvious example, discussing this on HN, is computers: fundamentally, the massive network of computers making up the internet is no different from the original Zuse computer - both are just number-crunching machines. But the difference scale makes is so huge that the two aren't even remotely comparable, in the same way that AI surveillance will allow you to control everyone, not just a few individuals.
"The USSR compiling detailed files on millions of citizens was not unethical because it was small scale, but Google compiling detailed files on billions of citizens is, because it's large scale."
"Surveilling terrorists is fine because there are very few of them, but CCTV to catch thieves in public places isn't because that's much larger scale."
These would be very odd and brittle arguments, though. In fact I've never seen anyone make an argument like that. The tolerability of surveillance depends very much on context - who is doing it to whom and why - with scale being a pretty small aspect. When people debate surveillance, it's always in the context of identities and purposes. But the notion that AI+surveillance is special ignores the questions of who the watcher is, who the watched is, and what the purpose of it all is, to focus on the least important part of all: the concrete cost to the watchers. After all, communist regimes already demonstrated that you can scale mass surveillance up to huge levels without needing any technology at all, just ideology.
That's why I'm suggesting that debates about AI in left-wing media are actually not about AI at all; they're just a way to restart the discussion of standard left-wing ideas and topics, like capitalism or sometimes identity politics. This article is an exemplar:
In fairness, it does start by recognising that "big brother" is the primary user. But the thrust of the article is about private (ab)usage. It ends by saying:
Imagine a surveillance system that falsely identifies a black man from his face and then just as falsely attributes to him aggressive intent from his expression and the way he is standing.
Except when the platform censors one side of the debate. Reminds me of The Joe Rogan podcast with Tim Pool and Twitter execs. https://www.youtube.com/watch?v=EbTXqrS9l5E
Don't get me wrong. I think Twitter's turn from "free speech makes us stronger" to censorious partisan prop is terrible, but it could only be making people weaker if Twitter had completely replaced other, less biased forms of communication of the same scale. I don't think that's really true. Before Twitter there were blogs, TV, radio, newspapers etc. All still exist and are doing fine. Twitter and other sites have been additive.
It's similarly wishful thinking to believe a vague "humanities education" would impart any ethics.
And at least in the science labs you don't have to figure out whether or not your grade depends on how well your reports parrot the teacher's own opinions back at them.
So yeah, you're technically right, but it's disappointing to see universities actively moving in the other direction. Of course, it still seems like university hires are woefully unprepared on day one, though I think many of us agree that 6-12 months of "real world" training for new hires is unavoidable.
My own anecdotal experience as a BA working in $BigTechCorp is that basic skills usually studied in the humanities are woefully lacking in many of my peers: writing/grammar, the ability to understand others' perspectives, and most of all ethics. I've lost count of the number of times I've had to say something like "uhhh, no, you can't do that, it's illegal", or had to explain to someone why adding more telemetry to a client-side application might be controversial.
My university was particularly known for its well-rounded, humanities-friendly culture. But in my own experience, I only had one humanities class that I really felt like I learned things in. It was titled "Great Texts" - we read excerpts from The Iliad, Augustine's Confessions, Nicomachean Ethics, Plato's Republic, and others along the same lines - and most of the course was spent having engaging, open discussions about the readings. It had a major impact on my inner life and I wish I'd had time to take some other classes like that.
My other "humanities courses" included:
- "Political Science" - rote memorization of how the United States' government works with very little analysis or discussion
- Psychology/Sociology, which were somewhat interesting but pretty straightforwardly scientific/memorization-heavy
I took an extracurricular class in Ethics (in general), probably the most useful class I ever took.
With AP transfer, I only took 5 courses (1 psych, 2 phil, 2 econ). I found the psych to be a waste, and the econ was not relevant in this humanities context. I actually found the philosophy really useful in this context, but discussions were sometimes a bore since just about nobody did the readings before class.
Take Mozilla's firing of Brendan Eich-- we had several massive debates here about it. One side was saying that companies should be able to expel leaders whose values don't align with the ones the company wants to advance. The other side was saying that a culture where you can be fired for advocating for the wrong political position is literal McCarthyism.
Would you expect an ethics class to do anything more than provide a forum for debates like these? Do you think anyone will actually change their view?
No, not really. That's kind of the point of ethics classes - to debate our ideas of "good" and "evil", to make people think for themselves, and to alert them to the fact that the questions exist in the first place.
The Hollywood blacklist was also maintained by companies and not the government. It really doesn't make a huge difference who's maintaining it, if enough employers / financiers ban employment / financing based on it.
Cory Doctorow refutes the idea that corporate censorship does not matter in his essay "Inaction is a form of action".
I've been on the receiving end of several ethical issues and find that it is often less about ethics and more about tribal politics.
So were the at-will employment laws that currently exist discussed as an extension of this, or did that little detail just become the elephant in the room? Because political views are not a protected class, so this "bad ending" has already been here for a while. It strikes me as strange to treat the idea as a possible bad future when it's the actual present.
In this case, the most relevant part is whether Eich was fired. As far as can be determined externally, he was not. Depending on your leanings, you may decide to believe (perhaps based on evidence, perhaps not) that what probably happened was morally equivalent to being fired -- but again, logic and law say that doesn't matter. If the law says you can't do X when condition Y holds, but Y is kind of blurry and hard to determine precisely, then all that means exactly nothing if you didn't do X.
Feel free to debate the legality or the ethics, but please don't get them mixed up.
In first year there was a subject focused around class discussions. I can remember each week we'd be given an issue; we'd go and do research on it, then we'd discuss it the next week in class. The lecturer would kind of guide us by asking leading questions, but it was mostly back-and-forth free discussion, talking about the pros and cons of the issue, etc. It was a relatively soft subject, but the discussions in class were always lively, and I found the diversity of opinions that came up pretty interesting - often people in the classroom would have very different ideas, and points would be raised that I hadn't previously considered. I can remember one discussion around the implications of GM food (which was topical at the time) sparked some really heated debate. I thought the subject was really good at opening us up to deeper ethical considerations.
In later years we had a subject focused on engineering disasters and failures - things like Bhopal, Challenger, and the Kansas City walkway collapse. How can we learn from these events? How can we apply those learnings to everyday practice?
There was also a fourth-year subject which included a guest lecturer from the Law Faculty, covering things like liability, professional responsibility, and codes of practice. It included discussion of whistle-blowing and disclosure.
The software world can feel quite a bit different; the same requirements around responsibility and liability don't exist, which affects the culture somewhat.
We're in good company... bad company... we are not the only ones experiencing this problem.
That's the thing with ethics: it's completely subjective.
This is why they love open offices (not because it encourages “collaboration”, because it doesn’t).
I can't say whether they're right about your school's course specifically, but it's certainly something I can imagine.
> Neither my friend nor my other classmates see the problem with the precise tracking of every move of every employee - that scares me a lot.
makes me think the criticism is probably valid. These sorts of classes, or even trainings, are exactly as you say: you're told "this is a problem" and you publicly say "yes it is" to avoid trouble, while privately you believe anything - yes, no, it's not a big deal, whatever. If you have reservations or qualifications, you're not encouraged, and maybe actively discouraged, to bring them up for discussion.
To take the explicit example, tracking the location of every employee isn't necessarily a problem. Context matters. (Indeed, tracking the location of every moving object on earth bigger than a fist isn't necessarily a problem either, and whether or not it is, one should prepare for it being in our world's future as technologies like smart dust advance to counter aggressive technology proliferation.) A class on such topics should be able to discuss different contexts apart from a baseline scenario and leave students with an understanding of why other people might or might not think such-and-such is a problem, beyond a juvenile "they're stupid, ugly, and evil" - even to tolerate not having a class consensus about scenarios with many possible subtleties, qualifications, and complications, which are the only realistically useful scenarios to talk about. A great outcome for a good course is that students may disagree on "X is a problem" or "I see a problem with X" stated without further qualifications, but are able to explain how it could or could not be, or be seen as, a problem with qualifications.
Not that I see the point of the particular project, but I also think not every possible kind of fun should be illegal because of "privacy".
A better example would perhaps be Pokemon Go. Lots of people enjoy playing that game. The question is how does Niantic use the data, but a game like that would not automatically have to exploit the data.
The evil information that can be extracted from the location data may also be exaggerated.
Also remember, no good deed goes unpunished.
: as a minimum this would be the leadership of all countries that oppose freedom by jailing or killing members or leaders of the opposition.
: a SEAL team took out the ISIS leader, but he still managed to blow his suicide vest, killing 3 of his kids.
My ethics course handled a number of issues. The only one I viscerally remember was Therac-25.
> that oppose freedom by jailing or killing members or leaders of the opposition.
With such a broad definition of bad guys, I think you've just listed the leaders of every country. I'm not sure that's your intent, but if it is, it's quite a radical position.
There were lots of lessons to learn from Therac-25 but I'm not sure any of them would be classified as ethical.
No that's just a crazy fantasy.
It is unethical to pretend to "help" someone by intentionally using an ineffective solution to make yourself look good without putting in the effort.
This is basically a lesser form of hero complex/syndrome.
You see a problem, think of a solution that makes you look like a hero and everyone is worse off except you.
Celebrities visiting poor African villages is probably the most obvious occurrence of this. They book an expensive flight and hotel and drive a car, and when they arrive they just do minor help for a week - but hey, everyone sees it on my Instagram account!!!
I totally get that you've gotta eat, and it would have to be a really bad job for me to suggest that someone choose to go hungry or homeless. However, if you have the option of taking more than one job, I do think we're ethically obligated not to go with the awful one. You've got to be able to look at yourself in the mirror, and to get up each morning without feeling like you're making the world worse. That counts for a lot.
Socio-cultural pressure has brought us to this level of advancement, civilization- and morality-wise.
This doesn't fall under the umbrella of "judgemental people" unless you mean to use such a wide brush to literally encompass everyone that's ever existed. "Peer pressure" is not always negative and in this case it most definitely wasn't. It was a corrective mechanism that worked very well for OP.
There was also an idea that, Google employees are the "good guys" compared to "evil Amazon", therefore when Google does something, it's de facto good, and it's better for Google to do the unethical thing since they will do it less unethically than Amazon.
Google employees care a lot about their "TC" and "GSUs" - that's it. What bugs me is that at least Goldman employees and their ilk kind of have that tacit acknowledgement that they are in it for the lucre. A lot of Googlers really act like they are in it for the greater good even when the company picks money over ethics at literally every junction.
This is a publicly traded company that explicitly states its mission as “helping the world do good”, and the executive team used the exact same argument when someone questioned them on this during an all-hands meeting. It was one of the most demoralizing moments of my career.
Imagine you were accused of doing something decades ago. Do you have an alibi for December 17, 1955? (adjust if you are younger, or imagine being accused of today's crime in 2077) You will have lost receipts, calendars, and ticket stubs. The people you were with, who could have served as witnesses, are long dead. You'll go up against a jury that hears a tragic story about some helpless child you supposedly harmed. Good luck with that.
It's especially bad that changes to the statute of limitations don't just apply going forward. It really feels like ex post facto law, which is expressly forbidden by the United States Constitution. Cases from about 70 years back are pushing the Boy Scouts (BSA) into Chapter 11.
The church is simply acting to prevent further legal exposure to what has to be the largest child sex abuse scandal in history. They’re not interested in protecting the innocent. If they were, they’d be much more transparent and cooperative with authorities than they have been.
You’re generalizing 100,000+ people of all ages and backgrounds from all around the world.
I get that people on this site get a raging hard-on against tracking on the web, I suffer from the same, but to say that getting site owners to snitch on you in exchange for their sites being able to exist is some huge breach of ethics is not a rational position to take.
When one is inclined to blame someone or some company for all the troubles in the world, it is helpful to ask them about the _specific_ harm that they feel the object of their ire is causing them. If they respond with a laundry list of things, that's a tell they aren't thinking rationally about it.
This isn't always about role models and being upstanding citizens. A lot of us in this field are fortunate enough to be able to decline work based on ethical stances. Anyone can say no. Can they afford the consequences?
If the "consequence" of making ethical career choices is you earn only $250,000/year instead of $300,000/year, you should probably reflect on why you value your soul so cheaply.
That said, it is possible for multiple people to be in the wrong in multiple dimensions, and there is room for nuance. Robbing a bank is wrong, but if someone robs one under duress, saying the robber should be punished would itself be wrong; not so, though, if what they were coerced into was outright murder.
Putting it in such an offensive way undercuts any moral claims you might make. Maybe that's why you get such strong push-back. People don't choose to work for "evil" companies out of fear. Some just don't think about it. Some just don't care. Neither of those make their decisions right, but that's not the same as fear either. Some people have different ideas of what's right. Some people believe they can turn something that's currently negative into something more positive. Agree or disagree, it's still not about fear. "Grow a pair" isn't an argument. It's an attack, and people respond to it as such. If you wanted to make an effective argument instead of just giving yourself a dopamine hit, you might try engaging with the actual reasons people decide as they do.
I spent the first two years out of college working for a (non-FAANG) very morally bankrupt company. It was interesting work, but the ethics of the product kind of weighed on me. I left to join a very-early-stage startup whose mission seemed much better and which I could comfortably say was at least ethically neutral, and probably well into the good side of most people's ethical spectrum (I also thought the tech was super interesting).
A year or so in and I became exhausted there. I felt super unproductive at work because the org felt so resistant to processes and had a very team-of-one cowboy coder mentality towards most projects. The culture felt empty because almost everyone was remote and being so close to the founders made me detest what I saw as their narcissistic qualities, and a lack of product traction was demoralizing. A trip to the dentist revealed that my blood pressure was higher than normal, especially for my age.
I jumped when my previous company gave me the opportunity to pursue a career change that I badly wanted but would not have been able to get anywhere else (I really felt I had a knack for people managing and making eng teams successful, but the former employer was the only place that would be willing to take a risk on a new manager with my level of technical experience). I am so much happier now and have actually noticed a measurable decrease in morning blood pressure readings that I started taking during my previous, more-ethical job.
Do I consider myself a shitty person because of the work I do? Sure. I don't know what the right answer is though, because I'm much happier now and I struggle to optimize for anything other than myself and those I'm closest to.
I applaud Mozilla for suggesting that people should consider where they draw their personal red lines, and encouraging them not to cross them for any amount of cash.
Have you done anything to act on that preference? Did you send a resume out this week? Are you networking?
Prioritizing your family's wellbeing (or your own) over an abstract evil might make sense the day you take that job, but that circumstance must be re-evaluated regularly.
Too often folks talk themselves into thinking what they are doing today is ok because they didn’t have a choice yesterday.
If you are in your mid-30s and you are an established and experienced person in your field of work, finding a new job should be the least of your problems. If you truly cared about the ethics of your work, you would have left your current job already.
When was the last time you had an interview with another employer? When was the last time you sent out resumes? When was the last time you networked with people from other companies that you find interesting? When was the last time you reached out to someone at company X for a cup of coffee? Have you considered starting your own business?
You don't remember, or you haven't done any of it since you started at your current job? Well, in that case, there's your problem. You might think that you care about ethics, but your lack of effort in finding somewhere else to work says otherwise.
I get it. You have probably convinced yourself that you are not doing anything wrong or unethical so you should not feel bad about it. Besides, the job probably pays well and it keeps you and your family fed and gives your family a roof over your heads and that is the most important thing afterall.
That is fine and all, but it is still a weak excuse if you truly cared about ethics in your work. Most people do not care, as long as the paycheck is good enough.
I wonder how the career programmers who worked at these organizations and still need the paycheck will feel about a new director or architect from google who is now "giving back".
Why would you seek out a job at a company in that business if you didn't believe in that business? Unlike Google (or Facebook) it's not a large enough choice to be a "default" employment option.
Palantir bites. In fact, Palantir is the first one to bite. You make it through the interview process and get an honest-to-goodness offer, few other companies are talking to you, and of those few, they're just now doing their initial phone screen. What do you do: take the offer or see if one of those phone screens pans out? Oh, and in most states if you decline a job offer, you lose your unemployment. If you decide to gamble and wait, you're gambling that you'll get an offer from someone else before rent comes due. Are you really willing to take that risk?
Now let's throw in another twist: you've got a family. And let's even be generous here and say you've got a few other leads at or very close to the offer stage, so it's not like you're going to end up on the streets for nonpayment of rent. But all of your other leads pay so much less that your children's lives will be noticeably affected. You may not like what Palantir does... but are you willing to tell your kids that Christmas is cancelled and you have to move to a much smaller house in a different part of town thus ripping them away from all their friends at school?
Ethics is on everybody. These “kids” are grown adults.
If you look for an excuse, you'll always find one.
So far I haven't heard of anyone who was forced to code the next behavioural analytics platform because they would have had nothing to eat otherwise.
Some of it might be due to information compression of things that are less relevant or taken for granted: rules like "Don't work for the Mob, Blackwater, or the NSA" are essentially a given.
One could make the argument that advertising at and subsequently locking someone into the walled-garden ecosystem is unethical because you are artificially inflating the consumers' opportunity costs. Some would look at that and call it a milder form of price-gouging.
It's a really weak argument compared to literal human rights abuses, but we're talking about ethics here, which is necessarily broad.
But if he had to take a job as a slave trader because there was nothing else available and he needed the money to survive...who are we to judge him in this hypothetical world where slavery is once again legal?
People who say they would just starve to death instead of do something immoral don't know what it's like to actually starve.
But I agree with the statement most people would do unethical things if their jobs required it. And I think that's sad to say.
I fervently hope that I would as well. Unfortunately I think most of us, including me, have a breaking point at which hunger and desperation would win over principles.
I've met normally good-natured people who, once they became lawyers/salesmen/financial "advisors", started engaging in utterly despicable, wicked behavior and justifying it as "I'm only doing my job". And then they would go on to church, or whatever, all the while thinking of themselves as "good people". Even more interesting, they would rail against "evil corporate behavior" as if they had no role in it.
We're capable of such amazing reasoning, creativity, and moral courage. But the very same minds can sustain sets of beliefs that are clearly (to others) internally contradictory or irrational, or that rationalize behavior that clearly violates one's own ideals.
Maslow’s hierarchy holds true. Morality is a luxury for many.
There's nothing worse to me than the "Champagne Socialist" who hand-waves about the righteousness of socialism but fails to tip the Uber driver. I hate that.
I'm with Taleb in that it's best to judge people when they have "skin in the game". No, I've turned down real-world jobs, gigs, and deals where deception was involved and the deal was not in the client's best interests. Sadly, "tactics" and dirty tricks are extremely common in the real world, particularly in sales and law (and sometimes tech). You would be amazed at the number of people who would use dirty tricks if:
- "money can be made"
- "I can get away with it"
- "my co-workers are doing it"
- (or) "my boss told me to"
However, yeah, I have to agree that it is difficult to judge people for decisions they make while staring death in the face when I'm sitting comfortably at a desk.
I hate it when people say "I'm only doing my job" when asked if they're doing something unethical. You can always take another job.
3.) Quality of life (work/life balance, time spent commuting)
4.) Ethical issues
5.) How Fun/interesting the job is
I imagine that most young people have the same outlook as me.
I think having great tooling / infrastructure at least gives stakeholders more options in terms of the business direction they're taking. You can pivot and execute faster, which to me means you can pivot away from something ethically bad and execute in a different direction faster.
Great tooling / infrastructure in my mind is also ethically salvageable and redeemable. A great tool can help an ethically positive division of the company as it can help an ethically negative division of the company. It may not always be black and white.
Lastly, great tooling / infrastructure generally requires top talent, which can move anywhere and is sensitive to things like ethics. Having great tooling / infrastructure, or the threat of losing great tooling / infrastructure by losing talent to ethical issues, can act as pressure for management to choose certain projects over others. I think Grasshopper is an example of one such decision.
1) Quality of life 2) How fun/interesting the job is 3) Ethical issues 4) Money 5) Moneyeh!
(It did from 2004-2014, and then probably has from 2017-current)
Absolutely not. If the sample is large enough to be considered representative, how can that data be ethically biased? Raw data has no ethical implication whatsoever; human emotion must be considered when assessing ethical value. The existence of a second representative sample which produces a different ethical implication is proof that the algorithm itself is not ethically biased.
>it seems a bit pedantic to differentiate between data bias and algorithm bias when both need to be so closely coupled together to produce any meaningful output at all
On the contrary, it seems disingenuous to argue that an algorithm, a set of computational steps, is ethically biased when a counterexample (in the form of a data set) exists. This is akin to blaming the process of eyesight for the observation of an ethically repugnant phenomenon. It is not the process of seeing that causes observation of ethically negative phenomena; one can just as easily observe ethically positive phenomena through the process of eyesight. We do not need to have invented glasses that filter out ethically negative phenomena to prove that the use of one's eyes is an ethically unbiased process; the fact that it is possible to observe both ethically positive and ethically negative phenomena is proof enough.
Bias could be introduced by seeking out negative phenomena and avoiding positive ones, but that doesn't imply anything about eyesight itself: it remains an unbiased process. If we are going to come to any meaningful conclusion about how to change the results of unbiased algorithms (photons enter the pupil, electric signals are decoded into conceptual models, etc.), we need to stay as logically consistent in our conclusions as possible. Observations should inform conclusions, not the other way around.
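The counterexample argument above can be made concrete with a toy sketch (all names, data, and the ranking rule here are hypothetical, invented purely for illustration): a fixed, data-agnostic ranking rule produces opposite group-level outcomes depending solely on which sample it is fed, suggesting the "bias" lives in the data rather than in the computational steps.

```python
# Hypothetical illustration: the same algorithm applied to two different
# samples yields opposite group-level outcomes. The rule itself never
# inspects group membership, so any observed skew comes from the data.

def rank_by_experience(candidates):
    """A fixed, data-agnostic rule: rank purely by years of experience."""
    return sorted(candidates, key=lambda c: c["experience"], reverse=True)

# Sample A: group X happens to have more experience in this sample.
sample_a = [
    {"name": "a1", "group": "X", "experience": 10},
    {"name": "a2", "group": "Y", "experience": 3},
]

# Sample B: the identical rule now favors group Y instead.
sample_b = [
    {"name": "b1", "group": "X", "experience": 2},
    {"name": "b2", "group": "Y", "experience": 8},
]

top_a = rank_by_experience(sample_a)[0]["group"]  # "X"
top_b = rank_by_experience(sample_b)[0]["group"]  # "Y"
print(top_a, top_b)  # prints "X Y"
```

Of course, this only shows that the steps are data-agnostic; in practice the choice of which samples to collect (and which features to rank on) is where the ethical questions re-enter.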
Your predecessors have left you enough life challenges as it is to be pontificating and proselytizing to you about morals.
While I don't like Google, what if I worked at Google but on a team that was doing good (like Angular or Project Zero)?
I think Mozilla is trying to remind techies to think about ethics for this exact reason: a lot of us have this weird mindset where, unless we can define something with mathematical precision, then it doesn't exist and we don't have to worry about it. This is an incredibly unhealthy way to think about ethics, and it's one of the reasons why our field is sliding towards a bad reputation.
The GDPR does a decent job in this regard around privacy and data use. Can we write a similar document to GDPR around other ethical concerns?
I think it's fine to try to raise awareness of this issue simply as something that younger folks/people starting their careers should consider, if they haven't already. But yes, if they do consider it, they should also try to get as much information as they can before making decisions. I also don't agree with pointing fingers at certain companies simply because that's a popular thing to do; you need a lot more data to back up statements such as "big company X is evil", and it may take months or longer to go through all that data and get an objective answer to the question. Otherwise we're simply doing gut/feeling-based selection. There's nothing inherently wrong with that, but then we should admit it to ourselves.
What's most important is finding out the core business models your company is aligned with, and who they work for. For example, if you thought Google was a fundamentally evil company (I'm not saying you or I think that, just an example) then by working on Angular you'd be helping to legitimize Google as a business that does a lot of good things. This will ultimately make it much harder to build a popular movement around stopping the evil Google.
You can see this play out in real time right now as Palantir and ICE try to rebrand as companies/agencies that sell/use tools which combat child trafficking. This may be true, but they also terrorize immigrant communities and must be stopped.
I grew up in a vulnerable community. It was called poverty. I am happy to say nowadays I am no longer poor.
Not everyone was born with a silver spoon like I assume many at Mozilla's leadership were, or live in the Bay Area where job choices are plentiful.
- There is no universal truth (like "killing is bad"). Whatever "bad thing" people are concerned that tech companies are doing might not actually be bad in context. Perhaps there are multiple contexts, and possibly unknown states, creating a multitude of potential ethical states for a single "thing". This may conflict with the very reason they're getting interested in ethics, creating a dilemma (and possible paradox).
- Ethics is uncomfortable, because you find yourself wrestling with questions like whether the "greater good" is more important than "individual good", or who should have rights, and what kind, and why.
- You find that what are often generally-held truths are in fact not true when taken under study with evidence, so you have to decide whether you will accept a generally-held truth or a truth backed by evidence, and when there is no evidence, what to do.
- It could be that Mozilla is just trying to influence people into not taking certain jobs at specific companies so they can achieve whatever their own organization's goals are. If so, what does that say of Mozilla's ethics?
- You may not find any answers, and still have to decide what to do with a dilemma.
Mozilla can try going after future employees, but good luck. People, both end-users and employees, have shown, by and large, that they do not greatly care about these issues.
Maybe they should practice what they preach
Fix the housing crisis in highly inflated markets by creating more supply (i.e. rescinding exclusionary zoning bylaws) and job candidates will be able to afford the more "ethical" choices.
The destructive greed of corporations can only be solved by organized political action.
If we suppose that we cannot solve the problem with individual ethical choices, and taking part in political action is indeed an individual choice (let's assume ethical), then organized political action as an aggregate of individual ethical choices cannot solve the problem.
Separately from that, I'm skeptical that a majority-ruled political system that largely runs on "donations" from unethical companies will be moved by an ethical minority.
To address your side point - there are examples of well organized ethical minorities achieving impressive political goals. Suffragettes, Freedom Riders.
Probably the most ethical thing you can do is to make as much money as you reasonably can and then donate most of it to the best bang-for-the-buck cause you can.
On one hand, it’s good to see ex FB employees speaking out on privacy issues, but it’s also somewhat hollow because many of them made millions on the very technology they’re now speaking out against. Honestly, how much weight does Sean Parker’s criticism of FB really carry? He’s not organizing a movement against FB, he’s doing one interview and then going back to his jet-setting lifestyle.
Ethics will never be a first-class citizen in the minds of all technologists as long as the money worship continues.
Really? I don't think many people outside the valley believed that about FB ever, and I don't think many people outside FB believe that now.
FB doesn't seem all that good at spin in general...   
: How Facebook’s crisis PR firm triggered a PR crisis https://www.theverge.com/2018/11/17/18099065/facebook-define...
: Facebook's PR Crisis Is a Mess of Its Own Making https://www.bloomberg.com/opinion/articles/2018-03-21/facebo...
: What Did Zuckerberg Know? Lessons from Facebook’s Latest Scandal https://www.prnewsonline.com/lessons-Facebook-data-crisis
The mainstream media had some very glowing reviews of Facebook during the Arab Spring: glowing articles were written about how these wonderful social media platforms helped organize people to start revolutions.
Easy to talk, but hard to deny all that money
My guess would be because it isn't actually very profitable, and even if they start out with the best of intentions they allow themselves to be acquired and trade ethics for money anyway.
Maybe I'm just really cynical about people in tech, but I feel like all this talk about ethics only happens when you're on the wrong side of it and most people here would be plenty willing to sell out if given half a chance.
Do those companies overall do more good than bad?
Can your influence/work at those companies result in more positive impact than working for a small company (simply because of the resources and reach of the bigger company)? It may depend on your job level/influence: being one of 100k rank and file may obviously not have any impact on company policy, but a more senior employee gets access to potentially more resources and larger-impact decisions than they would at a smaller but clearly more ethically positive company.
I guess basic question is, what is the purpose that you are trying to achieve by being ethically aware in job selection and how is it best to achieve that goal?
Don't get me wrong, I think it's great for people to consider ethics when deciding where to work, but we should also try not to fool ourselves. In the end, the reason to do this is to feel better about ourselves (and there's nothing wrong with that!), because an objective look at the potential positive impact we could have on the world at large may yield surprising answers about what the career choice should be.
This question is never explicitly brought up, and I think it ought to be, because in the silence lies a tacit assumption that the ethics of those pleading are the ones to be adopted. When people harangue me to vote, I light up and ask, "What if I voted for the exact opposite candidate you want?" So for some people I suggest Trump, for others Clinton. The silence becomes uncomfortable.
It isn't "consider ethical issues" or "vote," it's "adopt the ethics I have," and "vote the way I will."
Not admitting this and being explicit about it is what makes me doubt the ethics of those pushing the considerations in the first place.
In terms of ethics, the papers by Batya Friedman and Peter Kahn were incredibly influential to me, and many of them I would consider required reading for anyone interested in studying ethics and computing.
Usually that sort of thing comes coupled with "you are either with us or against us" later when that ethical bundle is in some phase of ascendancy.
On the other hand, taking a package whole seems to be less work, intellectually, than "be mindful, examine your choices and their impacts, cui bono" and so on.
Nowadays, when I see someone campaigning for "ethics" in tech, I just assume that this person is just engaging in general issue activism, close my eyes against the distraction, and get back to work. It's sad that we've reached this point.
In truth, the best way to stop unethical behavior is to make a law against it --- that way the process of rule-making is fair and transparent. The democratic process doesn't guarantee legitimacy, but it's better than anything else. The democratic process should decide the rules under which a society operates, and we shouldn't let a few self-appointed moral guardians misrepresent their personal preferences as universal rules and set the bounds of whole societies.
One, if you don’t take the job someone else will.
Two, if your country acts ethically, will other countries?
So, the only real solution would be to establish an international framework to define what is ethical and what is not, with some sticks behind it for enforcement. Otherwise it can end up as a self-inflicted wound.
I get it, ethics isn't a "hard skill" like programming or math, but can we move forward with problem solving under a confirmed axiom that ethics, and by extension philosophy, should be part of basic education again?
What about lawyers that defend violent criminals?
Rich words indeed from the corporation that more than doubled executive pay and then found themselves needing to lay off 70 people to save costs on a restructuring.
Mr Gotcha is a boring person. Don't be Mr Gotcha.
Just because we're all living in the status quo doesn't mean we can't change it; we're not all disqualified as hypocrites merely for living with the status quo. Aspire to something better, whether you're currently benefiting from the status quo or struggling against it.
We should be taking ethical jobs and who cares if it's Mozilla who said it. They're right about this, even if they're wrong about other things.
People far better placed than Mozilla the company have been saying this for years, so Mozilla is late to that particular party anyway, and plenty of the damage was done by Mozilla itself.
Personally, my take is the opposite: if you are an ethical person find the very worst company that you are still willing to work for and try to change them from the inside. After all if only non-ethical people want to work at certain companies that will make things worse, not better.
That won't work.
You do shape your environment, but your environment also shapes you. Meaning that if you go to a miserable place on purpose, that place will just drain you if you cannot change it completely.
And changing a company on your own, as a lone employee... is doomed from the start and will lead to burnout and depression very soon.
I propose the opposite: ethical people gather together and create organisations and companies to show that things can be done better.
Talk and criticism are cheap. Actually doing something better is much harder. But you need like-minded people around you; if they are all dispersed, everyone fighting on their own against a behemoth will not make things better. All those cold "evil" corporate people? Well, at some point they were humans with ideals as well (in most cases) but got changed by their surroundings.
See also Zvi's recent series on what he terms "Immoral Mazes": https://thezvi.wordpress.com/2020/01/16/how-escape-from-immo...
His claim is that power in large organizations is a zero-sum game, and merely competing in this game in order to advance necessarily alters your personality and lifestyle. This is due to the demands that the game places on you. The whole series is worth reading.
Generally, “implies a falsehood” is considered to be equivalent to “is false”.
What’s your point?
Changing a company from the inside is something that can only be done by those on the inside; hence my qualification that you should only join companies you can still stomach and then push as hard as you can in the right direction, rather than joining the very worst organization you can find in the world, one that you, hopefully, cannot stomach.
Also, there may be a risk of people being more likely to derive in invalid ways when attempting the reductio, like making some hyperbolic statements, exaggerating the implications of a statement, and treating those exaggerations as if they actually logically follow.
But this doesn’t make reductio an invalid technique, just one that people can fail to use properly.)
Like I said in the other comment, the objection should be that the derivation of the absurd from the premise is invalid. For example, pointing to the “that you can stomach” part of your claim, and saying that the derivation was invalid in neglecting that part of the statement.
Talking about changing ISIS from the inside as the IT administrator doesn't convince anybody that it's impossible to change an organization from within. It's a poor argument that might win Internet points for being a spicy hot take, especially on Twitter, but it does little to debunk the idea that, eg, an engineer could ethically join Facebook with the aims of elevating the company up, through working on the team to detect Russian trolling rings/Anti-vaxxers/MLM marketers.
Perhaps their derivation of the absurd conclusion from the claim <X> was based on a misunderstanding of the claim <X>, or perhaps their derivation of the absurd conclusion from it was based on some fallacy, or some false assumption.
But reductio is a valid logical step, so long as we have the validity of the argument that the absurd conclusion follows from the premise.
In this specific case, X is:
"Personally, my take is the opposite: if you are an ethical person find the very worst company that you are still willing to work for and try to change them from the inside"
with the response, Y: "join ISIS' IT department"
What is the refutation of this particular "X implies Y" you would have used instead? Is there one that doesn't require refining X and instead attacks the Y? X is already verbose, and adding additional verbiage does not help the case (IMO)
Otherwise, you're depriving an ethical organization of your contributions while aiding the competition.
Unless you're entering a leadership role it's very unlikely you'll be changing a damn thing from within, while being corrupted by the surely lavish compensation an unethical company generally can afford to shower on its individual contributors.
Now this is the kind of motivated reasoning I can get behind. I'm not lazy, I'm genuinely heroic!
And when there is no such party?
I'm not convinced by this argument. The status quo may exist because there are fundamental issues with changing it, and those issues don't go away with wishful thinking. Attempting to change things, while praiseworthy, may end up doing more harm than good. At best, if it doesn't change the status quo, you've just wasted energy and bothered other people. But much more likely, it seems to me, we end up trading some pros/cons for others; it's not a strict improvement or even an overall improvement.
If one analyzes attempts at change that failed, there are lots of common patterns. One of them is being ignorant of the fundamental reasons why the thing you are trying to change is the way it is. Being ignorant of those reasons makes it much more likely you will fail at your task.
Can you imagine yourself arguing this in feudalism or the USSR?
One wouldn’t succeed in changing this.
Of course, some conceivable changes to the way things are, are both possible to achieve, and are overall beneficial to make.
Some changes are impossible (or effectively impossible) to make, and some are possible but have costs which outweigh the benefits.
I suppose the question is determining, or estimating, which is which.
(Though, I suppose the question isn't just what proportion you are right about as to which category they fall under, but rather what the average benefit and cost (and I suppose also the variance and other moments?) are of your pursuit of the changes you choose to pursue.)
You should read about Perestroika.
"Hey, you're asking something unrealistic; people are under the same constraints that lead you to do what you advised against. How can you expect us to operate under constraints even you can't live up to? Isn't that a prerequisite to giving such advice?"
With that said, I don't think executive pay with layoffs is a good example of unethical behavior. A better one would be their behavior in the Looking Glass/Mr. Robot extension, or the forced updates via a mislabeled feature to cover their own screwup.
If a non-profit is so careless with security in the service of its marketing and analytics, why would you expect others to resist an even stronger siren's call?
No, it is a logical fallacy.
Appealing to "accepted norms" is another case of appealing to authority – except a less verifiable authority, in that case. Ironically, I am currently listing fallacies, which itself does not a good argument make – so I will add to it that that's not what “informal” means; the adjective is merely specifying that it is not a logical flaw, but instead falls into a broader category (such as implying stronger evidence than exists, or something).
Your original argument is flawed, however; it neglects that organisations cannot increase the talent available to ethical companies and decrease the talent available to unethical ones by participating in the job market, and invokes an argument similar to the tu quoque fallacy by implying that Mozilla's actions have any bearing on how good its advice is.
Merely calling out that you used a fallacy (and misidentified at that!) is not a sufficient counterargument alone.
Yes, "informal" isn't "logical", hence the italics specifying as much.
No, my response wasn't a counterargument because what I was responding to wasn't an argument to begin with. There was nothing to defend because nothing of substance was produced.
No, my original argument is not flawed because your counterargument has nothing to do with the concept we've been discussing in this sub-thread.
Allow me to summarize: Mozilla sucks and yes, students should consider ethics in their job search which means they should probably not work for Mozilla.
My personal hypocrisy doesn't make air travel any better for the planet.
1) Mozilla isn't right or wrong in this case, they just lack credibility on the matter which, again, warrants the type of response that the original and many other commentators here had.
2) Your analogy isn't quite 1:1 with this situation. If you were a railroad tycoon that flew daily to perform business and made a public statement about how awful flying on planes is and how everyone should stop in lieu of train travel, it would be hard to take you seriously and you would be rightfully called out. Mozilla likes to identify itself with the ethical side yet it is no better than the companies it is criticizing, going as far as being partially funded by one of them.
Edit: Sorry if I come across as belaboring, I hadn't read your other post. :)
Mozilla makes the argument that young people should consider the ethical issues before taking jobs.
HN user matheweis asserts that Mozilla's actions or past claims are inconsistent with their argument.
Therefore, the reasoning goes, the argument that young people should consider the ethical issues before taking jobs is false.
It should be incredibly clear how this line of reasoning is fallacious and therefore invalid.
This sequence strikes me as non sequitur and strawman. That is how "the reasoning goes" when one actually commits tu quoque, but merely pointing out hypocrisy is not tu quoque.
Edit: It's possible that I have not read all of his comments. I'm only aware of this:
"Mozilla wants young techies to consider ethics in their jobs?
Rich words indeed from the corporation that more than doubled executive pay and then found themselves needing to lay off 70 people to save costs on a restructuring"
If someone's gonna downvote me I'd appreciate some color on your choice too, thanks. ;-)
If executive pay were rationally based, that might be true. But it appears rather to be the case that executive pay is really high just because executives get a lot of control over how much they are paid. More likely it would result in someone being promoted to executive and business continuing as usual.
The poaching companies are going to run out of places to put executives long before Mozilla runs out of competent employees who could run the company. In the meantime Mozilla gains a great reputation as a place to work if you want to get poached to a million dollar salary job, so it gains access to lots more competent engineers.
Or at least I think that's the sane argument for why Mozilla executives (and most companies' executives) should be paid less. I'm not totally convinced that all the premises are true.
Maybe most people do, but there're also those who are driven by things other than money, and would stay.
Why does it appear to you that way? Research suggests that CEOs are overwhelmingly important and have a significant impact on the valuation of a company, on a scale significantly larger than their compensation.
"For example, open-minded and curious CEOs create an entrepreneurial culture, driven and intense CEOs create a culture of results and achievement, and altruistic CEOs create a culture of empathy and cooperation. Second, as a 20-year review from 1993 to 2012 showed, CEOs’ judgment affects key strategic and managerial processes, such as staffing, financing, and marketing decisions. Third, CEOs’ reputations — their public personas — affect company valuations and stock prices.
It has been estimated that 22% of the variability in firm performance can be attributed directly to the CEO. In fact, even when researchers use transcripts of interviews and meeting conversations to infer a CEO’s psychological profile, their ability, strategic thinking skills, and communication skills predict a substantial amount of variability in company performance."
I'd suggest you evaluate whether you base your opinion around CEO pay on data evaluating CEO performance or whether it's just an emotional response because 70 workers being laid off just elicits more sympathy than a CEO receiving higher compensation.
- CEOs having a large impact - we agree here
- It is hard to replace the current CEO with a new one who will be, on average, equal (or better) than the existing one.
It's the second statement which I question. In a few rare cases (e.g. Elon Musk) there is good evidence to support it, but in the case of Mozilla I don't see it.
The research you cite goes both ways. If we can predict who will be a good CEO through an interview CEOs are easier to replace.
Most people on HN are speculating that many (not all!) engineers would be perfectly qualified for the job, for instance. Market rates for this sort of person are a few hundred thousand, not a few million.
PS. I believe the relevant organization here is a non-profit with $20M in revenue, not the company with $500M, but I could be wrong; I don't care that much about the specific example.
Also, please provide some evidence that the board chair of a non-profit with an annual budget of $20M has a market rate salary of $2M/yr. Note that Mozilla Foundation also has an executive director (more like $250k/yr) and the Mozilla Corporation had a separate CEO and executive team (whose salaries aren't reported in the Mozilla Foundation form 990).
But hey, someone with more knowledge of Mozilla than you or I should really be the ones defending this. Maybe some are reading this thread and don't mind chiming in?
If you're a company in desperate need of a turnaround, you're not going to have much luck finding a great CEO unless you offer a competitive compensation package and a guaranteed payout.
That's why you see failing companies' CEOs walk away with big packages: the company won't get a good CEO unless it offers compensation comparable to what other companies would pay that person.
Unfortunately, Mozilla seems to be focused on everything except for making Firefox better than Chrome.
This feels like a misleading question to ask. There are plenty of disputes about "market rates" for both groups, so the premise isn't one I'd agree to. Market rates are decided by these decisions, not the other way around.
If Mozilla had been able to get adequate leadership at rates that didn't grow faster than rank-and-file wages, that would have been the more ethical decision overall. If more companies did the same, market rates would reflect that. Unless you think no such leadership was available at that rate, in which case we simply disagree.
Either remove the donation website, or find execs who are driven not by money and market rates, but by changing and improving society.
I think the latter type of exec would do a better job of making FF popular and increasing revenue. The idea that only execs who demand lots of money (market rate), lest they jump to another company, can do a good job seems weird to me.
I've understood that in theory the Donate-to-Mozilla thing doesn't donate to the execs — but in practice, I'd say yes it does (to some extent at least).
> which would likely lead to more than 70 employees losing their jobs?
I think (guess) the opposite. That they could all have kept their job and Mozilla could have hired more.
I have no problem with that. Let them leave.
Suppose we set a maximum compensation cap of $10M/year in the US. So executives can no longer earn more than $10M/year in total compensation (salary, bonus, golden parachute, etc.).
Say a bunch of overpaid execs leave the US in response. Where exactly are they going to go where they'll 1) be paid more than $10M/year, and 2) actually be able to do these super-high-paying executive jobs effectively (i.e., they'd need to know the local language, etc.)?
I would be more concerned if people who picked fruit were being poached with higher pay. We have no one willing to do those jobs.
Whatever else I might say about Mozilla as an organization, the people who are part of it have their hearts in the right place. The ethical messaging is sincere.
I actually passed on an offer from Palantir when I went to work for Mozilla in 2010 (I left in 2015) and I never once doubted I made the right choice. With those two options I'd absolutely do it again, Mozilla management issues notwithstanding.
We can certainly disagree on what should and shouldn't have been restructured.
Above is a quote from the guide released by Mozilla.
Some readers might be confused by the Mozilla organization when trying to reconcile its public statements with its own corporate behaviour.
Given the concerns some folks apparently have about Google (e.g., privacy, AI, harassment), according to the Mozilla guide, consider this hypothetical.
What if a company's staff were concerned about their employer having a major contract with Google? What should they do?
Should they walk out? What if their salaries are paid indirectly by Google, i.e., almost all of their employer's revenue comes from the Google contract?
Mozilla Corporation derives almost all of its revenue from a contract with Google. Executive salaries are funded from Google payments, with lesser payments from Baidu and Yandex for the Chinese and Russian markets, respectively.
According to one report, without the deals it makes with search engines, Mozilla could not survive for very long.
Mozilla Foundation purports to be fighting for user choice and privacy. At the same time Mozilla is automatically sending web search queries to Google, by default, without requesting user consent, in exchange for hundreds of millions of dollars in royalties.
Why is this the NLRB's business? As I recall, the controversy at Google was because some guy published a manifesto on the company's internal communications systems, and this caused a lot of arguments on those internal chat boards. I don't really see why Google doesn't have a right to police their internal employee communication systems and keep them from devolving into political arguments. Their systems, their rules.
This wasn't about a few employees having a conversation at the water cooler; this was about internally-used social media systems that everyone in the company could see. What employees say in small-group or private conversation is one thing, but I don't see how Google is obligated to give employees a platform to spout their political views in front of the entire company. Most other companies don't do any such thing.
When it settled with the NLRB, Google said in its press releases that the agreement made no mention of politics.
The terms of the settlement required Google to clarify its policies on employees exchanging information related to compensation and organising, not its position on employees expressing political views.
Also, arguably it's not mutually exclusive to suggest people consider where they work while executives draw larger incomes. From the outside, clearly the board of directors thought the increased pay was worth it. Since we have limited insight, I don't think we should assume unethical intent.
Finally, I don't know which 70 jobs were cut, but presumably their skills were determined to be no longer necessary to the business. Tough luck for them, but with respect to the remaining employees, it's ethical to lay them off.
Mozilla Foundation is a non-profit, and Mozilla Corp is privately held, wholly owned by Mozilla Foundation. No one is being compensated in the form of equity; all compensation plans are salary and cash bonuses.
The data from the tweet matches the numbers from public IRS filings of Mozilla Foundation. See https://www.mozilla.org/en-US/foundation/annualreport/2018/ for the most recent report.