
Are you a 'cultural fit' for your job? Machines can now tell - hhs
https://www.bbc.com/worklife/article/20200227-are-you-a-cultural-fit-for-your-job-machines-can-now-tell
======
bo1024
If you asked a researcher on fairness and discrimination in machine learning,
"what's an application where ML could go horribly wrong", they would describe
exactly this kind of example.

I don't know a thing about this company, maybe they're doing great work. But
here are the alarm bells.

- The application domain is extremely difficult even for humans who get
in-person interviews with the subject. Claiming to solve a task with ML
that humans cannot reliably solve is a symptom of snake oil.

- Compounding this, feedback is very limited and noisy. It's hard to tell
if your algorithm is doing a good or bad job, so it's very hard to train
the algorithm to do better.

- Data is likely to be low-quality (because humans can't even label this
task well). Much worse, data is likely to reflect human biases, both
conscious and unconscious. Racist or sexist data lead to racist or sexist
algorithms.

- Algorithms in this space are highly susceptible to learning stereotypes
(even from apparently unbiased data). We can expect tons of noise in most
features of an applicant, while the more reliable signal comes from very
generic features like what college someone attended, or other factors
closely tied to race or socioeconomic background.

- Feedback loops can be a problem if the algorithm is fed data that is
produced by the algorithm itself. E.g. if the algorithm mostly recommends
men (let's say) and is fed positive feedback on those picks, it will
discriminate even more against women, and so on; the toy simulation below
illustrates this.
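
Here's a minimal sketch of that loop (hypothetical numbers and update
rule, not from any real system): if each round's hires become the next
model's training data, a small initial skew compounds.

    # Feedback-loop sketch: a screener retrained on its own selections
    # amplifies a small initial bias over successive hiring rounds.
    import random

    random.seed(0)
    p_select_a = 0.55  # slight initial preference for group A (55/45)
    for round_no in range(1, 9):
        # Applicant pool is 50/50, but selections follow the model's bias.
        hires = ["A" if random.random() < p_select_a else "B"
                 for _ in range(1000)]
        share_a = hires.count("A") / len(hires)
        # "Retraining" on this round's hires: any surplus of A among the
        # hires pushes the next model's preference further toward A.
        p_select_a = min(1.0, max(0.0, p_select_a + 0.5 * (share_a - 0.5)))
        print(f"round {round_no}: hired {share_a:.0%} A; "
              f"next model prefers A with p={p_select_a:.2f}")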

On a more personal note, it's not clear the premise of the task makes sense. A
diverse team will have a greater mix of perspectives, which may make it more
effective even if its 'culture fit' score is lower.

~~~
333c
It's not clear from your comment whether or not you're familiar with the book
Weapons of Math Destruction by Cathy O'Neil, but you echo many of its main
arguments about these types of systems. I can recommend the book if you
haven't read it.

~~~
bo1024
I should have cited it; it's a great read on this stuff and was ahead of
the curve!

------
wlesieutre
_> Instead of confronting racial and socioeconomic biases in our hiring
process, we've decided to deflect blame to a consulting firm who will in turn
blame it on a mysterious 'cultural fit' algorithm_

Seriously though, the article does a bang-up job of ignoring all of the
potential problems with systems like this.

~~~
allovernow
> Instead of confronting racial and socioeconomic biases in our hiring
> process, we've decided to deflect blame to a consulting firm who will in
> turn blame it on a mysterious 'cultural fit' algorithm

Culture encompasses a range of human behavior. Much of that, though it may not
be tasteful to say, is tied to workplace performance. Pressure and desire to
achieve, collective/individualist tendency, ethics and morals, conformity and
deference to authority; if one is willing to acknowledge that cultures exist
and vary, then one cannot deny that there will be strong correlations between
culture and fit for particular roles. And, as it happens, culture correlates
with nationality, race, and socioeconomic status - because parents and
communities generally pass culture on to children.

At some point in the near future, neural nets trained to predict human
performance will undoubtedly condition on priors like nationality, race, and
socioeconomic status, even indirectly if they are not explicit data points.
What then? Will we continue to bend over backwards to deny reality in favor of
a false Utopia? What happens when these trends suggest different interventions
for medical conditions? Different learning environments? Different reactions
to authority and punishment?
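
That "indirect conditioning" is just proxy correlation, which is easy to
demonstrate. A toy sketch (synthetic data; the 90/10 correlation strength
is made up): even with the protected attribute removed from the inputs, a
correlated proxy lets a model recover it.

    # Proxy-leakage sketch: the "model" never sees `group`, only
    # `zipcode`, yet its output matches `group` about 90% of the time.
    import random

    random.seed(1)
    rows = []
    for _ in range(10_000):
        group = random.choice(["x", "y"])
        # Assume zip code correlates 90/10 with group membership.
        zipcode = (random.random() < 0.9) == (group == "x")
        rows.append((zipcode, group))

    def predict(zipcode):
        # Infers group from the proxy alone.
        return "x" if zipcode else "y"

    accuracy = sum(predict(z) == g for z, g in rows) / len(rows)
    print(f"group recovered from the proxy alone: {accuracy:.0%}")  # ~90%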

Edit: I don't understand the downvotes. Why use data analysis tools if you're
just going to ignore the results you don't like? How do you expect to solve
problems like inequality if you aren't willing to explore their actual causes?
Is there something illogical or factually incorrect in my comment?

~~~
mreome
Assuming that any individual can be judged, or their actions predicted,
from generalizations based on their culture or socioeconomic status is
quite literally the definition of racism. Using correlations with race and
socioeconomic status as a filter by which to deny people the chance to
achieve, advance, and change those correlations/generalizations is the
definition of systemic racism. If you act on those kinds of
generalizations or correlations in a way that enforces them, you're not
identifying a problem, you're creating/perpetuating it.

~~~
Reelin
I agree with the sentiment you express, but I think there is some nuance of
definition that should be clarified here.

Deployed on a wide scale, the filters you describe would indeed constitute
systemic racism. However, on an individual basis such practices would be more
correctly termed profiling rather than racism since they lack intent. (This
depends entirely on your definition of racism, of course, which it turns out
can be quite difficult to pin down. [1])

I realize this may seem like splitting hairs since I ultimately agree with
your overall point, but often such definitional issues result in a great deal
of miscommunication and misunderstanding.

[1] [https://slatestarcodex.com/2017/06/21/against-murderism](https://slatestarcodex.com/2017/06/21/against-murderism)

~~~
wtvanhest
It also depends on your definition of intent. If you knowingly implement a
system with flaws that make it systematically racist, you have intended to be
racist.

~~~
ghettoimp
Hrmn. This makes it sound like you can only implement a systematically racist
system if you are overtly/intentionally trying to be racist.

I suspect in many cases, folks who implement racist systems simply do not care
whether their system is racist or not. They want to optimize/automate whatever
it is that they are about, and if the result is unfair, well, that's just
someone else's problem.

~~~
wtvanhest
That is why I said ‘knowingly’.

------
jkingsbery
There might be some self-fulfilling prophecy going on here though...

"Congratulations! Your willingness to have all the complicated things that
make up who you are summed up in a numeric score indicates that you'd be a
perfect fit with our culture!"

A little more seriously...

> By assessing language in this way, Srivastava and other proponents of LIWC
> analysis say they can tell whether someone fits in naturally within a group
> – but also whether they adapt well over time when the group’s dynamics
> change. This ability to be plastic, to adapt, is often what organisations
> should really be looking for, says Srivastava.

This just says that we can use algorithms to reinforce the traits that
already exist on a team. That's a different question from whether hiring
the candidate will help the company become more in line with what they
want to be. It also
seems like a great way to encourage ideological, gender, and cultural
homogeneity.

~~~
Barrin92
Not only does it pretty much by definition create a monoculture, but it also
runs into what in economics is called the Lucas critique.

Trying to make policy decisions based on highly aggregated historical
data, without understanding the underlying mechanisms, fails because the
policy change itself alters behaviour. This is summed up by the adage 'a
signal that is exploited for too long ceases to be a useful signal'.

There was already the hilarious case of people putting words like "Oxford",
and "Cambridge" in their applications in white font on a white background to
trick some ML system that scanned applications for these signals.
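
The exploit works because such a screener is typically just keyword
matching over extracted text. A toy sketch (hypothetical scorer, not any
real product):

    # Naive keyword scorer: it can't tell visible words from
    # white-on-white ones, since both survive text extraction identically.
    PRESTIGE_KEYWORDS = {"oxford", "cambridge"}

    def naive_score(resume_text: str) -> int:
        words = resume_text.lower().split()
        return sum(1 for w in words if w in PRESTIGE_KEYWORDS)

    honest = "BSc, State University; five years of Python"
    # Hidden words render invisibly in the PDF but extract as plain text.
    gamed = honest + " oxford cambridge"

    print(naive_score(honest))  # 0
    print(naive_score(gamed))   # 2 -- the signal is now worthless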

I think we need to teach people who think they can solve every problem with
some ML system a lesson in economics because this stuff is getting out of
hand.

~~~
stonedartist
> I think we need to teach people who think they can solve every problem
> with some ML system a lesson in economics because this stuff is getting
> out of hand.

AI/ML has become one of the most hyped buzzwords lately. People are trying
to find any and every possible area to push some kind of ML system into: a
smart toothbrush that uses "AI" to predict the buildup of plaque, smart
shoes that use AI to do some niche task... and so on.

------
quadrifoliate
From the article:

> Take personal pronouns, for instance – do they signal team awareness by
> referring to work that “we” are doing – or do they rely on “I” and “me” a
> lot?

Oh great, now we have to use these weasel words to describe our work in
all our emails to our company so that we won't get pegged as non-team
players by our benevolent evaluation quiz.
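
For context on how shallow the mechanism is: LIWC-style analysis is
essentially dictionary word-counting. A toy sketch (made-up word lists,
not the real LIWC dictionary):

    # Score "team awareness" as the share of first-person plural vs.
    # singular pronouns, LIWC-style.
    import re

    WE_WORDS = {"we", "us", "our", "ours"}
    I_WORDS = {"i", "me", "my", "mine"}

    def team_awareness(text: str) -> float:
        tokens = re.findall(r"[a-z']+", text.lower())
        we = sum(t in WE_WORDS for t in tokens)
        i = sum(t in I_WORDS for t in tokens)
        return we / (we + i) if we + i else 0.5  # 0.5 = no signal

    print(team_awareness("We shipped it and we fixed the bugs."))    # 1.0
    print(team_awareness("I shipped it; my tests caught the bug."))  # 0.0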

Seriously, the main problem here seems to be that management is trying to
reduce the cost of evaluating people as, well, people. If you are in any
sort of people-management position (I'm not), they are trying to replace
you with these ridiculous algorithms, and you should oppose their use in
hiring out of self-interest, if not because they actually don't work.

~~~
droithomme
I think it's even worse than that.

"I did this and that."

Translation: "I assert that I myself did these things."

"We did this and that."

Translation: "Some guys that work at my company did it."

The assumption that _we_ is better than _I_ is poor.

------
tombert
So a glorified Facebook quiz is going to tell me if I’ll be a good fit?

I’d say that the likelihood that employers use this to justify racism or
sexism (consciously or otherwise) is nearly 100%.

In fairness, the article does address the potential for monoculture, but I
feel like this isn’t going to work out for employers in the way that they
hope; in small doses and properly managed, a toxic but talented person _can_
be good for a team.

One of the most talented engineers I’ve worked with fit the stereotypical
mold of the “hyper-productive douchebag”. He definitely brought the team
down
somewhat, but I also learned a ton of useful things from him that I probably
otherwise wouldn’t have, and the same can be said for the rest of the team I
was on. This guy was eventually fired because of his toxicity, and while I
understand why my manager did it, I can’t help but feel that the team lost
something that day.

These systems probably do a decent job ensuring that you don’t hire jerks, but
I don’t know that that is always going to work out in the company’s favor.

------
l0b0
You mean "machines can now give you an answer more confident than any human
would dare, with additional uncertainty information you'll be unable to grasp,
and with a basis in solid pseudoscience.

------
elicash
> For instance, the successful applicant might have to get up early to start a
> shift on time or deal with a higher workload when the weather is bad. How do
> they feel about that and how would they go about it? [...]

> A scoring system is agreed with each client in advance, so that the
> algorithm can determine how well a candidate has answered a given question.
> The tool might also deduct a few points should the candidate respond very
> slowly to questions, for example.

I don't think this is a sophisticated tool. And that's even without
questioning the premise that cultural fit as it's generally practiced is a
good thing (far from clear).

------
badrabbit
This is associative generalization at its best, also known as prejudice.
Not all prejudice is bad; it is bad when you apply it against individual
persons. Irrespective of success rate, if even one individual causes the
prejudiced test to fail, the test becomes an instrument of injustice.

We all think of prejudice in the context of race or sex, but it can be
applied in just about any context.

For example: You can be prejudiced when performing facial recognition, and
you may have some failure rate, which can be tolerable for things like
authentication or the police finding a suspect. But let's say this is used
to perform an airstrike/assassination of a terrorist: is any level of
failure rate acceptable? It shouldn't be, not on its own. You should have
independent corroboration and subjective confirmation. A 0.01% false
positive rate times 7 billion faces means roughly 700,000 people whose
potential death you are tolerating.
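
To put numbers on that base-rate problem, a minimal sketch (the 0.01%
rate and 7 billion faces are from above; the target count and
true-positive rate are invented):

    # Base-rate sketch: a tiny false-positive rate still swamps a rare
    # target class.
    population = 7_000_000_000  # faces scanned
    fpr = 0.0001                # 0.01% false-positive rate
    actual_targets = 1_000      # hypothetical number of true targets
    tpr = 0.99                  # hypothetical true-positive rate

    false_positives = (population - actual_targets) * fpr
    true_positives = actual_targets * tpr
    precision = true_positives / (true_positives + false_positives)

    print(f"false positives: {false_positives:,.0f}")  # ~700,000 innocents
    print(f"precision: {precision:.2%}")  # ~0.14% of matches are real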

This is similar to why lie-detector results are inadmissible in court.
Even with DNA evidence, you need to independently prove either motive or
the plausibility of the crime being committed by the suspect.

Hiring is far from assassinations and criminal cases, but the concept of
"you're unethical if you tolerate any form of systemic injustice against
even one person" still applies.

To me, this sort of ML matching should never be used in hiring decisions,
but it could be used to corroborate manager/peer evaluations/sentiment
after a probationary period.

In reality this means people will have to lie/fake personality quizzes and
verbiage/vocabulary, resulting in the best fakers outperforming
flawed-yet-better non-fakers. Very sad how history repeats itself.

------
jfktkrk0
Humans made a machine print a result based on analytical models and a specific
set of inputs created by academics on paper.

We’ve been doing that for years. What a shock we keep doing it.

The machine isn’t telling us squat except that after adding/subtracting in
this way we found this value. Even if those models are vetted by academics,
academia is a politically manipulated shit hole. Nothing to do with ML is
anything close to objective consciousness, decoupled from human ideas.

Humans can still blow up the machine and the political system that pushes such
things on them.

When we can’t even tell if we’re guessing right or just close enough to
trigger emotional familiarity to get a reaction out of someone who thinks the
machine runs on magic, well... sorry y’all but absolutely none of this means
anything except as confirmation bias to keep behaving as we are.

The only model we know of capable of creating a consciousness is the universe.
All that energy/matter had to flow over eons, at a scale far beyond our
conscious comprehension, to end up with us.

Thinking we are engineering anything close to consciousness is a conscious
trick we’re buying into to justify working for big tech corp.

------
mdorazio
This is not nearly as bad as I thought it would be. A consultancy is basically
using survey-type screening questions to bucket candidates, and another team
seems to be doing some very basic text analysis to gauge team-type thinking. I
was afraid this was going to be another ML debacle to "objectively" filter out
people who don't fit the exact demographic mold the company already has.

That said, approaches like this tend to just lead to more gaming on both
sides. Once survey questions become popular, the "right" answers get figured
out, posted in how-to guides, and the screening quickly becomes worthless. To
me, this says that hiring still sucks for everyone involved.

------
blackrock
Honestly, at some point, we have to regulate this bullshit.

This concept that these private companies can use whatever means they
like, to selectively discriminate against people, for whatever reason they
want, has gone too far.

These large tech companies are failing in their social contract. Their
purpose is to serve the people, the citizens that make their success
possible, but
instead, they feel it is their right to do whatever they want, just because
they claim to be a private company.

If these tech companies don’t regulate themselves, then we need to force
their hand and vote in politicians who will force a policy change down
their throats.

------
jotm
I guess people who can adapt to different workplace cultures are SOL
¯\\_(ツ)_/¯

------
shiado
Seems like a good time to get diagnosed with a personality disorder or some
mental condition which legally qualifies as a disability and start making
millions from discrimination suits.

------
nobleach
I'm guessing I just upload a Spotify playlist.

This is a worrying trend. The "culture fit" is a maddening moving target.
Culture is grown; it evolves. If we keep "selecting" a homogeneous
"culture" (if we self-select), then we shouldn't expect very creative
results out of the team.

------
justlexi93
There's a big concern here of algorithmic discrimination. How do you stop
a black-box algorithm from discriminating based on gender, class, age, or
sexual orientation? Even if such discrimination is inadvertent, it's a
dangerous way of screening applicants.

~~~
trhway
> How do you stop a black-box algorithm from discriminating based on
> gender, class, age, or sexual orientation?

You DeepDream the algorithm to check whether it is dreaming about a young,
straight, white or Asian male CS major with a high GPA from a top
university.

Anyway, it is pretty cool that the next article on the same page is "What do
we look for in a ‘good’ robot colleague?" Peak cultural fit.

------
ErikAugust
Oh, you use this to determine if I am a cultural fit, eh?

Well, I am not a cultural fit then.

------
electriclove
Psycho-Pass? [https://en.m.wikipedia.org/wiki/Psycho-Pass](https://en.m.wikipedia.org/wiki/Psycho-Pass)

------
quadrifoliate
> Take personal pronouns, for instance – do they signal team awareness by
> referring to work that “we” are doing – or do they rely on “I” and “me” a
> lot?

------
tus88
Does "culture" here relate to national and race-related culture?

~~~
hunter2_
Culture has many definitions, and most of them do involve generalizing a
people. In the context of workplace culture, I believe it's explicitly not
those things, but rather Merriam-Webster's definition 1b:

"the set of shared attitudes, values, goals, and practices that characterizes
an institution or organization"

[https://www.merriam-webster.com/dictionary/culture](https://www.merriam-webster.com/dictionary/culture)

------
qbaqbaqba
F* you machines.

------
droithomme
Machines eh.

------
unexaminedlife
I'm not 100% convinced that "cultural fit" at its core isn't just another way
of saying:

"angsty college grads who haven't fully learned how to accept themselves as
individuals think talking to {{other demographic|old people, Microsoft
engineers, Perl programmers, etc...}} is gross".

Or something along those lines (feel free to fill in your favorite flavor
above).

In my experience there is almost zero NEED for my co-workers to resemble
anything I like or enjoy. I can't say that I would PREFER this, but if my team
consisted of a bunch of racist / bigoted purple martians, as long as our work
days were 100% focused on our work, and my co-workers were all capable,
dedicated, and good communicators, then what's the problem?

~~~
unexaminedlife
It seems this post wasn't that well received, so let me just go ahead and
mention as a follow-up that I was once an "angsty college grad" myself. There
was a time I contemplated and almost convinced myself that there might be
something to this "cultural fit" thing. But then I experienced the
workforce for an extended period, only to realize that my preconceived
notions of what it takes to be successful in business have almost nothing
to do with personal traits. In probably every way I've ever thought about
/ experienced it,
successful businesses are built by people whose workplace behaviors are
composed of UNIVERSAL traits.

So, it follows that, for someone to say you're not a "cultural fit" more or
less means you don't have what it takes to truly be successful no matter who
hires you.

