Why does a person's interest in the application of their work serve as a signal for whether you should hire them? I mean, maybe for someone in a product role, but how is it relevant to hiring an individual contributor?
Not hiring someone just because of their internal philosophies feels like gatekeeping to me.
If someone is a cynic and realizes most startups aren't out there "making the world a better place"... that doesn't really have any bearing on their potential output.
It is gatekeeping. That's the point. I don't want to work with people who feel it's reasonable to be unaware of, or agnostic to, the effect their work is going to have on actual people. I don't believe in the meme that someone's role ought to dictate whether they need to consider the consequences of their creative efforts on other human beings.
I have a lot more respect for people who consider these things and just have different opinions than mine about what counts as a worthy application, than for those who consider it unnecessary to think about these things at all. We have an obligation, if we are going to call ourselves "engineers", to consider what we are working on from an ethical perspective.
I think there is a bit of goalpost moving happening here:
- you started with ad systems as an example of evil: they patently aren’t. They are more a result of the deeper cause - folks don’t want to pay for things if possible. So the bill gets moved to a different table, that’s all. All the humanitarian efforts (if any) are standing on the shoulders of the money generated from ads
- if someone says ‘I just want to solve hard problems’ - it is quite a leap from there to assuming they don’t care about social problems. Maybe they don’t feel empowered/qualified to tackle the big social questions and are just trying to make a living and possibly be productive doing so. Or they don’t want to tackle a social conversation in a workplace setting.
I am very wary of this push to make everyone engage with social/philosophical questions whether they like it or not. A lot of people just want to make it through the day, build expertise in something, and make it through their life. They’d prefer to pay taxes and let other entities / experts deal with those questions. This doesn’t mean apathy; it just means a lack of time and ability. I think that’s worth respecting.
You're being silly. The guy is explaining his perspective: what he believes and why he believes it. He's not writing a thesis or constructing some logical argument. This isn't a debate. Applying the term "goalpost moving" to this makes absolutely no sense.
I just feel like you're taking a confrontational approach rather than just trying to understand his position. Nothing he says is inherently contradictory.
Lol, isn’t it odd you consider the defense confrontational while the OP started by calling a bunch of folks morally challenged?
Fwiw - I don’t work on ad systems. I was just stating my opinion about how borderline ethical considerations around misuse are pervading engineering and science today. What about intent?
I didn’t say anyone was morally challenged. I said there are a lot of people I’ve encountered in my career that are ethically apathetic. I highlighted ad systems (not all ad systems, just some) as the kind of thing I personally consider toxic and where I have encountered people who check themselves out from caring about the ethical dilemmas involved in developing such systems, focusing instead on the fun puzzles involved.
My point isn’t that I won’t hire people who worked on such things; my point is I won’t hire people who are completely disinterested in the ethics of what they are doing. I’m not imagining this - I have worked with many people like this in my several-decades-long career. Beyond the ethics, this is just good business, since people plowing ahead on things while being blind to ethics is how people get harmed and lawsuits get filed.
This isn’t a revolutionary concept in other engineering fields: you can lose your license if you violate certain codes of ethics, either maliciously or due to ignorance or apathy towards following them.
I never said all ad systems are evil, yet you are saying no ad systems are evil.
I never said that if someone doesn't care about the purpose of their work, they don't care about social problems.
If you're going to turn this into a debate, at least try not to tear down strawmen.
The point of my post wasn't to make strong claims about ad systems being universally evil. It's just like, my opinion man, that some are. The point was to state that I do not want to work with people who, knowingly, do their work in an ethical vacuum, focused entirely on the technical problems at hand.
No you didn’t call them evil: you just called them
>Very, very toxic things for society, like human behavior modification (ad) systems
You didn’t make those points about people’s intents; you just said you won’t hire them / won’t work with them.
Sorry for paraphrasing. My argument stands.
Yes you’re allowed to have whatever opinions you want to hold. But here you’re proclaiming it in a public space where it can definitely be construed as judgmental.
Finally, you accuse my arguments of fighting a straw man, and yet you construct one yourself: ‘folks who work in an ethical vacuum’. My whole point is that’s probably a very minuscule number of folks, and a No True Scotsman refinement of your earlier, more generic statements. My whole response is about how most folks do consider it but file it under fair-use expectations and move on - so it is not a fair opinion. That’s all.
You sound like you were offended by my characterization of ad systems and extrapolated a ton of imaginary arguments from there you’re attacking. I’m not sure who you are arguing with, but it sure isn’t me.
It's ineluctable. If you are an engineer, your work has a moral and ethical axis that is inseparable from the rest. This is what our professional societies believe, it is what you are taught in school, and it is in many ways no more than taking responsibility for your actions.
What you are describing is apathy. You don't get to stand apart from the work that you do because it is hard.
- morality and ethics are a gradient and are fluidly being redefined as we evolve. Are you still immoral or apathetic if you use electricity generated from coal? Or are you saying we are all apathetic, but this is the one instance you want to stake your argument on?
- almost all systems get misused over time: are all those makers apathetic? What about the intent of the hustlers using such systems?
Holy Christ, are you seriously asking why ethics, and concern for how the systems you design interact with end users and the targets of those systems, might be a worthy consideration?
Let me give you a concrete example:
Imagine you are a software engineer tasked with building a facial recognition system to help police identify known criminals near the time and location of a crime. It observes nearby people and assigns each a probability of being a known criminal. The police department demands 80% accuracy for the product.
You design such a system using some black-box facial recognition AI, and you get the following results:
Overall 78% accuracy, with:
- 6.5% false positive rate
- 31% false negative rate
Not too bad, you tweak some things, hit your 80% accuracy without messing with the false positives too badly, and you meet the specification provided by the client. Mission accomplished and you're ready to ship right? Makes the company money? No problems?
Cool. Except, because you didn't really care much about how the technology you deployed would be used, or the ethics surrounding its use, you failed to consider the right performance targets despite what your client asked for, and your system is nearly 100% racist.
What happened?
You trained on equal numbers of prison mugshots and mugshot-like photos of people with no criminal records. You failed to consider that black people are overrepresented in the US prison system (38% of prisoners but 13% of the US population). Your classifier just learned to label someone a likely criminal if they were black, using essentially no other criteria.
Yet the actual likelihood that people flagged by the system as "criminals" in fact have a criminal history is at most somewhere around ~33%, despite your system reporting 80%. Worse, even in a hypothetical situation where black and non-black people are present in their average proportions, there would be a near-equal number of black and non-black people with criminal histories in the vicinity of the crime. Worse still, since people tend to be more segregated than that: where black people are an even smaller minority, there will be more non-black people with criminal histories around; where they make up a greater proportion, their likelihood of being falsely accused goes up even more.
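The ~33% figure falls out of Bayes' theorem: the flagged-person-actually-has-a-record probability (positive predictive value) depends on the base rate of people with criminal histories in the scanned crowd, not just on the classifier's accuracy. A minimal sketch (the base rates below are illustrative assumptions, not numbers from the example):

```python
def ppv(sensitivity, false_positive_rate, base_rate):
    """P(actually has a record | flagged), via Bayes' theorem."""
    true_pos = sensitivity * base_rate
    false_pos = false_positive_rate * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Rates from the example: 31% false negatives -> 69% sensitivity,
# 6.5% false positives. Base rate of "known criminals" in the
# scanned population is an assumption for illustration.
for base in (0.01, 0.05, 0.10):
    print(f"base rate {base:.0%}: PPV = {ppv(0.69, 0.065, base):.1%}")
```

With a ~5% base rate the PPV lands near a third, in line with the "at most ~33%" claim; at a 1% base rate it drops below 10%. That gap between reported accuracy and real-world PPV is exactly the kind of thing a spec-only mindset misses.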
And FYI... systems with similar flaws have actually been built and deployed in the past. How do you think that affects trust in the company, and in the technology in general, in the long run? Considering end-use ethics brings value.
It's a very troubling example, but most of it is focused on product failures. Isn't that a bit orthogonal to ethics? Maybe there's some tension between how hard something is to 'get right' and whether it should be attempted, but it sure doesn't seem black and white.
Warren Buffett has a great quote on this topic. He says he hires on three criteria: intelligence, energy, and character. He adds, "Those first two will kill you if you don't have the last one. If someone's immoral you want them to be dumb and lazy".
Being a high performer is not a positive when someone's looking to take advantage of you.