
Social Finance Faces Sexual Harassment Claims - strangeloops85
https://www.nytimes.com/2017/08/11/business/social-finance-sofi-sexual-harassment-lawsuit.html
======
ringaroundthetx
Wouldn't it be helpful to look at these in isolation?

Corporations routinely have these disagreements and dissatisfactions
regardless of the sector.

"Another Silicon Valley Startup" versus just an article about SoFi?

Implying a pandemic of sexual harassment would seemingly indict corporate
America as a whole, and it would be disingenuous to go sector by sector if the
intention were simply to report the news.

~~~
cmiles74
Looking at these in isolation might give some the mistaken impression that
these instances of workplace sexism and harassment are isolated incidents.
When viewed as a whole, it's clearly a problem for software engineering (as
people in the field have been claiming for years). In fact, I'd be willing to
bet that we'll soon see comments full of anecdotes from people who have never
seen sexism or this kind of harassment in their workplaces claiming that there
is no problem. Or implying that this is somehow not that bad.

For sure this is an issue in many fields, but software engineering seems to be
the only one in which people proudly proclaim there is no problem, despite the
increasing number of reports to the contrary.

~~~
spaceseaman
> I'd be willing to bet that we'll soon see comments full of anecdotes from
> people who have never seen sexism or this kind of harassment in their
> workplaces claiming that there is no problem.

I think this happens because people generally believe that an anecdote
confirming the null hypothesis serves as evidence for that hypothesis. It's
one of the best examples of something that scientists, engineers, and so-
called "logical" thinkers screw up on a day-to-day basis.

In other words, suppose I claim that some days a subset of the population can
see the earth's sky as green, rather than blue. My null hypothesis in this
case is that the sky is blue and always will be for all people at all times.
Finding several anecdotes from people who have seen the sky blue can never
confirm my null hypothesis. They may have simply never seen the sky green.

In statistics, we can partly work around this problem with significance tests
or confidence intervals. In interactions with normal humans, I like to take
the personal approach of always putting a little more weight on anecdotes
that push against the null hypothesis. Being contrarian doesn't make an
argument more valid, but when you encounter something "outside of the norm"
it's worth evaluating whether your sense of "normal" is more biased than you
might think.
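To make the weak-evidence point concrete, here's a quick sketch of the
likelihood ratio between the two sky hypotheses (the 1% rate and the twenty
reports are made-up illustration numbers, not anything from the article):

```python
# Two hypotheses about the sky:
#   H0: the sky is always blue, for all people at all times
#   H1: on some days, some people (assume 1% of observations) see it as green
# The 1% rate is an arbitrary illustration parameter.

def likelihood(n_blue_reports, p_green):
    """Probability of n independent 'I saw a blue sky' anecdotes."""
    return (1.0 - p_green) ** n_blue_reports

n = 20                        # twenty people all report a blue sky
l_h0 = likelihood(n, 0.0)     # = 1.0 under H0
l_h1 = likelihood(n, 0.01)    # ~= 0.818 under H1

# The likelihood ratio is close to 1, so a pile of blue-sky anecdotes
# barely distinguishes the hypotheses: it is evidence, but weak evidence.
ratio = l_h0 / l_h1
print(round(ratio, 3))        # -> 1.223
```

The same arithmetic is why "I've never seen harassment at work" anecdotes do
little to separate "there is no problem" from "there is a problem most people
don't personally witness".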

~~~
ThrustVectoring
Erm, what? Encountering normalcy is evidence for things being normal. It might
be weak evidence - after all, the "sometimes some people see the sky green"
also predicts a lot of sky-looks-blue. But it's still evidence, you can't just
throw it out because you've labeled it the "null hypothesis".

~~~
spaceseaman
I'm not saying you throw it out. Read more carefully, don't put words in my
mouth please.

> Finding several anecdotes from people who have seen the sky blue can never
> confirm my null hypothesis

were my words exactly. May I ask what issue you take with this wording? In
your counter, you misinterpret this as a claim that I throw evidence out. My
actual claim is that such anecdotes serve poorly as evidence.

As you said, you cannot throw out such evidence, and I agree. You seem to
have misunderstood what I meant.

More importantly, you then fall prey to the exact logical fallacy I'm
discussing while trying to counter me:

> Encountering normalcy is evidence for things being normal

False. False. False. False. This is legit the encapsulation of what makes this
a fallacy. Consider the following thought experiment:

Suppose I give you a black box, and I ask you to prove to me that this machine
works perfectly. You sit down and watch the machine, and you can verify that
it is working perfectly right now. Even so, it is logically impossible to
prove that the machine works perfectly at all times, because observing normal
operation now tells you nothing about the moments you don't observe. You
could watch the machine all day,
and all night, for eternity, till the stars die in the sky. At any point while
watching it, would you claim that you now have evidence the machine always
operates normally? How do you know I haven't programmed it to break the
instant you turn away from it? Or the instant you stop observing it? There's
no way to know for sure...so do you have any evidence it's performing
perfectly?

Anecdotal evidence is very similar. Many anecdotes reaffirming that all is
working as planned tell me nothing, because that is the "null hypothesis". I
take issue with your wording "because you've labeled it the null hypothesis":
I didn't label one or the other maliciously, or even intentionally. The
definition of the null hypothesis (in my experience; I'm not a mathematician
by trade) is the scenario that is the "default", "normal", or "expected".
That fits both the scenario I gave and the analogy I make.

Again, I'm not saying I'm "throwing out" such evidence. In actual human
interactions, we cannot swing the pendulum entirely in the other direction
and simply ignore or throw out any evidence that confirms the "normal"
either. In such a case, we would be making the exact same logical error, just
in reverse. I'm just trying to point to a general guideline based on
statistics and our use of the null hypothesis within it.

Mathematical ideas do not map perfectly to worldly ones, but they serve as
useful tools and models. In this case, I use my model to remind myself to
consider anecdotes that are "abnormal" or "beyond the norm" as more important
evidence since these "abnormal" events are the very things I want to be able
to recognize.

A rougher "human" example: say you give your cancer test results to five
random doctors, and four of them say you're okay as expected, but one says
you have cancer. My point is that the fifth doctor's evidence is more
"important" (in a human, not mathematical, way) to me than the other four. He
could have found something they all missed.

~~~
ThrustVectoring
I think we actually agree - if competing theories both expect a lot of
"normal" to happen, you make much smaller updates on "normal" than on
"abnormal". What I take issue with in particular is:

>I think this happens because people generally believe that an anecdote
>confirming the null hypothesis serves as evidence for that hypothesis.

It is evidence. It's not much evidence, and it's evidence you should expect
from "unusual things sometimes happen"-type hypotheses, but it is evidence.

Maybe the phrasing I want is something like "People generally believe that
anecdotes consistent with multiple theories help distinguish them". Dunno.
Probably a lot of this is just Bayesian vs Frequentist worldviews talking past
each other.

>It is now logically impossible to prove the machine works perfectly at all
>times

It's logically impossible to prove anything at 100% certainty. The best you
can do is gather more observations and make incremental updates each time. I
can tell you my degree of belief that my next observation will show
everything is fine (if I don't know anything else, it's Laplace's Rule of
Succession: if I observe it 100 times and it all works, my confidence for the
next time is 101/102).
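The Rule of Succession is easy to check directly; a minimal sketch:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's Rule of Succession: after s successes in n trials, the
    probability that the next trial succeeds is (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# 100 observations, all showing the machine working fine:
print(rule_of_succession(100, 100))  # -> 101/102

# With zero observations it gives 1/2: maximum uncertainty, not proof
# either way, which is the whole point about "normal" anecdotes.
print(rule_of_succession(0, 0))      # -> 1/2
```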

------
atomical
Are these types of cases taken on contingency?

