
A common thing I've run into is people working on very, very toxic things for society, like human behavior modification (ad) systems, who get up every morning excited and enthusiastic about it because the technical challenges keep them interested. I generally avoid hiring people like this, who often will state openly they don't care very much about the application of their work, but "just want to solve hard problems."



I agree. I think it partly comes down to the extent to which they are apathetic about this, though, vs being actively aware of the wrong they feel they are doing. If somebody works on something that they personally believe is toxic and bad for society, I would be concerned about that person's level of alienation and what else that might mean about them as a worker and a person I have to work with. However, I'd be interested to know how many people are truly in this category. Are these engineers, enthusiastically solving hard problems on some nefarious product, genuinely against what they're doing? Maybe most of them are simply apathetic about that part of it. That's not great either, of course, and it seems like an immature and selfish attitude at the very least. Of course, I would caveat this by saying that we don't all have the luxury of finding companies that align with our values and it's probably a "first world problem". I'm very much aiming this at skilled software engineers, especially those in the major cities of wealthy countries, who can probably pick and choose a fair bit.

Personally, I don't want to work with people who don't care about the end goal for practical reasons as much as anything else. For the sorts of companies I've worked for at least, they make sub-standard contributions. I work on a very product-centric engineering team in a complex domain where not knowing the domain well will seriously hamper your ability to plan, refine and execute on features and bug fixes. Sure, we have a product manager but we still need that deep knowledge and you're probably only going to obtain that by being into the product and fairly engaged with it and the company's mission.


So true. I recently found out an old acquaintance is now using his PhD to do facial recognition for Facebook. OMFG, not for all the money in the world....

Conversely, I had another friend who switched out of his PhD when he discovered every red cent of funding was somehow coming from the race to build killer robots. Props to you Henrik, if you ever read this!


> very, very toxic things for society, like human behavior modification (ad) systems

I'm curious... since all advertising has the goal of modifying human behavior by definition (from the state of not buying your product/service, to buying it), would you consider all advertising to be toxic by your criterion?

And if not, where do you draw the line exactly?


I don't draw any lines. That's generally not necessary. However, there are some systems that are clearly far enough on one side of the line that I would consider it unethical to work on them.

The point isn't that I expect others to agree with me on this, but simply that I expect them to think about this and form opinions about it.


This seems like a kind of simple 'gotcha' question, right? Drawing an imaginary line and asking someone to pick a point where it magically changes, when it doesn't work like that.

Some advertising is done as product discovery: you make something good, and you want people to know about it. Some advertising is done to convince you to purchase regardless of its value proposition, using psychological tricks. Or, since they were talking more generally about manipulating human behaviors, we can include 'dark patterns'.


The gotcha question goes back a long time. See "Loki's Wager".


Arguably (see the book “Disciplined Minds”) this outcome is one of the functional aims of STEM higher education. The exam structure selects for and thus identifies students willing to focus on technical problems divorced from ethical context.


Why does a person's interest in the application of their work serve as a signal for whether you should hire them? I mean, maybe for someone in a product role, but how is it relevant to hiring an individual contributor?

Not hiring someone just because of their internal philosophies feels like gatekeeping to me.

If someone is a cynic and realizes most startups aren't out there "making the world a better place"... that doesn't really have any bearing on their potential output.


It is gatekeeping. That's the point. I don't want to work with people who feel like it's reasonable to be unaware of or agnostic to the effect their work is going to have on actual people. I don't believe in the meme that someone's role ought to dictate whether they need to consider the consequences of their creative efforts on other human beings.

I have a lot more respect for people who consider these things, and just have different opinions than I about what they consider worthy applications, than those who just consider it unnecessary to think about these things. We have an obligation, if we are going to call ourselves "engineers", to consider what we are working on from an ethical perspective.


I think there is a bit of goal post moving happening here:

- you started with ad systems as an example of evil: they patently aren’t. They are more a result of the deeper cause - folks don’t want to pay for things if possible. So now the bill gets moved to a different table, that’s all. All the humanitarian efforts (if any) are standing on the shoulders of the money generated from ads.

- if someone says ‘I just want to solve hard problems’ - it is quite a leap from there to assuming they don’t care about social problems. Maybe they don’t feel empowered/qualified to tackle the big social questions and are just trying to make a living and possibly be productive doing so. Or they don’t want to tackle a social conversation in a workplace setting.

I am very wary of this push to make everyone engage with social/philosophical questions whether they like it or not. A lot of people just want to make it through the day, build expertise in something, and make it through their life. They’d prefer to pay taxes and let other entities/experts deal with those questions. This doesn’t mean apathy; it just means a lack of time and ability. I think that’s worth respecting.


> goal post moving

You're being silly. The guy is explaining his perspective. He's explaining what he believes and why he believes it. He's not writing a thesis or constructing some logical argument. This isn't a debate. Applying the term "goal post moving" to this makes absolutely no sense.

I just feel like you're taking a confrontational approach rather than just trying to understand his position. Nothing he says is inherently contradictory.


Lol, isn’t it odd that you consider the defense confrontational while the OP started by calling a bunch of folks morally challenged?

Fwiw - I don’t work on ad systems. I was just stating my opinion about how borderline ethical considerations from misuse are pervading engineering and science today. What about intent?


I didn’t say anyone was morally challenged. I said there are a lot of people I’ve encountered in my career that are ethically apathetic. I highlighted ad systems (not all ad systems, just some) as the kind of thing I personally consider toxic and where I have encountered people who check themselves out from caring about the ethical dilemmas involved in developing such systems, focusing instead on the fun puzzles involved.

My point isn’t that I won’t hire people who worked on such things; my point is I won’t hire people who are completely uninterested in the ethics of what they are doing. I’m not imagining this, I have worked with many people like this in my several-decades-long career. Beyond the ethics, this is just good business, since people plowing ahead on things while being blind to ethics is how people get harmed and lawsuits get filed.

This isn’t a revolutionary concept in other engineering fields: you can lose your license if you violate certain codes of ethics, either maliciously or due to ignorance or apathy towards following them.


I never said all ad systems are evil, yet you are saying no ad systems are evil.

I never said that if someone doesn't care about the purpose of their work, they don't care about social problems.

If you're going to turn this into a debate, at least try not to tear down strawmen.

The point of my post wasn't to make strong claims about ad systems being universally evil. It's just like, my opinion man, that some are. The point was to state that I do not want to work with people who, knowingly, do their work in an ethical vacuum, focused entirely on the technical problems at hand.


No you didn’t call them evil: you just called them

>Very, very toxic things for society, like human behavior modification (ad) systems

You didn’t say those points about people’s intents, you just said you won’t hire them / won’t work with them.

Sorry for paraphrasing. My argument stands.

Yes you’re allowed to have whatever opinions you want to hold. But here you’re proclaiming it in a public space where it can definitely be construed as judgmental.

Finally, you describe my arguments as fighting a straw man, and yet you construct one yourself: ‘folks who work in an ethical vacuum’. My whole point is that’s probably a very small number of folks, and something you are refining, no-true-Scotsman style, from your previous generic statements. My whole response is around how most folks do consider it but file it under fair-use expectations and move on - so it is not a fair opinion. That’s all.


You aren’t making any sense.

You sound like you were offended by my characterization of ad systems and extrapolated a ton of imaginary arguments from there you’re attacking. I’m not sure who you are arguing with, but it sure isn’t me.


It's ineluctable. If you are an engineer, your work has a moral and ethical axis that is inseparable from the rest. This is what our professional societies believe, it is what you are taught in school, and it is in many ways no more than taking responsibility for your actions.

What you are describing is apathy. You don't get to stand apart from the work that you do because it is hard.


I see two issues with that way of thinking:

- morality and ethics are a gradient and are fluidly getting defined as we evolve. Are you still immoral or apathetic if you use electricity generated from coal? Or are you saying we are all apathetic but this is the one instance you want to stake your argument on?

- almost all systems get misused over time: are all those makers apathetic? What about the intent of the hustlers using such systems?


Great, you've successfully diluted the statement with questions that are adjacent, if you squint.

Engineering work has an ethical element to it. I do not see how your 'just asking questions' intersects with this.


As someone founding an early-stage startup, I really appreciate hearing your thoughts on this and it's very encouraging to know I'm not alone.


Holy Christ, are you seriously asking why ethics and concern for how the systems you design interact with end users and the targets of those systems might be a worthy consideration?

Let me give you a concrete example:

Imagine you are a software engineer tasked with working on a facial recognition system that helps police identify known criminals in order to find suspects near the time and location of a crime. It observes nearby people and assigns each a probability of being a known criminal. The police department demands 80% accuracy for the product.

You design such a system using some blackbox facial recognition AI, and you get the following results:

Overall 78% accuracy, with:

- 6.5% false positive rate
- 31% false negative rate

Not too bad. You tweak some things, hit your 80% accuracy without messing with the false positives too badly, and you meet the specification provided by the client. Mission accomplished and you're ready to ship, right? Makes the company money? No problems?

Cool. Except, because you didn't really care that much about how the technology you deployed would be used or the ethics surrounding its use, you failed to consider the right performance targets despite what your client asked for, and your system is nearly 100% racist.

What happened?

You trained on equal numbers of prison mugshots and mugshot-like photos of people with no criminal records. You failed to consider that black people are overrepresented in the US prison system (38% of prisoners but 13% of the US population). Your classifier just learned to label someone a likely criminal if they were black, using essentially no other criteria.

Yet the actual likelihood that someone the system identifies as a "criminal" in fact has a criminal history is at most somewhere around ~33%, despite the fact your system labels it as 80% likely. Worse, even in a hypothetical situation where blacks and non-blacks are present in their average proportions, there's a near-equal number of black and non-black people with criminal histories in the vicinity of the crime! Worse still, since people tend to be more segregated than that, when blacks are in even more of a minority there will be more non-blacks with criminal histories around, and when blacks make up a greater proportion, the likelihood of being falsely accused goes up even more.
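
To make the base-rate problem concrete, here's a minimal sketch (Python, with illustrative numbers that roughly echo the hypothetical above rather than any real deployment) of how the share of flagged people who actually have a record collapses once real-world prevalence drops below the 50/50 split of the training set:

    # Bayes' rule: P(has a record | flagged by the system).
    # Sensitivity and false positive rate echo the hypothetical above
    # (1 - 31% false negatives ~= 0.69, 6.5% FPR); the prevalence values
    # are illustrative assumptions.
    def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * false_positive_rate
        return true_pos / (true_pos + false_pos)

    for prevalence in (0.50, 0.10, 0.03):
        ppv = positive_predictive_value(prevalence, 0.69, 0.065)
        print(f"prevalence {prevalence:.0%} -> P(record | flagged) = {ppv:.0%}")

    # Roughly 91% at 50% prevalence (the balanced training set), 54% at 10%,
    # and 25% at 3%: the "80% accurate" system is wrong about most of the
    # people it flags once people with records are actually rare.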

And FYI... such systems with similar flaws have actually been built and deployed in the past. How do you think that plays on trust in the company and the technology in general in the long run? Considering end-use ethics brings value.


It's a very troubling example, but most of it is focused on product failures. Isn't that a bit orthogonal to ethics? Maybe there's some tension between how hard something is to 'get right' and whether it should be attempted, but it sure doesn't seem black and white.


Warren Buffett has a great quote on this topic. He says he hires on three criteria: intelligence, energy, and character. He adds, "Those first two will kill you if you don't have the last one. If someone's immoral you want them to be dumb and lazy."

Being a high performer is not a positive when someone's looking to take advantage of you.


What if they disagree with you on whether it's bad for society?


Not a relevant concern - what's relevant is whether they have an opinion.

I'll work with almost anyone who can form an argument that shows they thought hard on the ethics of what they worked on, even if I fundamentally disagree with their conclusions. The people I don't want to work with are the people who when I ask them this, say: "you know, I never really thought about it that much. I kind of just leave that to the people in charge, and focus on trying to do the best work I can." There are a lot of people like this, but they're not a cultural fit for teams I create.


But is it really so, or do you expect the thinking to go in a particular direction? Let's say they are libertarian-adjacent, and think that e.g. ad manipulation is morally neutral. The first stage of that is not giving a crap. I was like that. On separate occasions, I actually did spend some time thinking about the morality of certain things, and my opinions moved EVEN MORE towards not giving a crap - I found that some things I found instinctively icky and unacceptable, like sex work, pushing opioids, or placing people below the API (didn't work on any of those... yet?), are actually mostly neutral, and I have no reason to condemn them like I used to. Some things I still find icky, but now I wonder if some might be similar thoughtless bugs in my brain.

Is the mere act of thinking enough for you like this, or does it have to go the right way? :)


the cognitive dissonance is strong in this one



