
A three-page paper that shook philosophy, with lessons for software engineers - jsomers
http://jsomers.net/blog/gettiers
======
bunderbunder
> A philosopher might say that these aren’t bona fide Gettier cases. True
> gettiers are rare. But it’s still a useful idea...

At least as presented, I see the idea being used to do more harm than good.

Take the first example, with the form not autofocusing. We're already not in a
Gettier case, because the author didn't have JTB. The belief that he caused
the bug obviously wasn't true. But it wasn't justified, either. The fact that
he had rebased before committing means that he knew that there were more
changes than just what he was working on between the last known state and the
one in which he observed the defect. So all he had was a belief - an
unjustified, untrue belief.

I realize this may sound like an unnecessarily pedantic and harsh criticism,
but I think it's actually fairly important in practice. If you frame this as a
Gettier problem, you're sort of implying that there's not much you can do to
avoid these sorts of snafus, because philosophy. At which point you're on a
track toward the ultimate conclusion the author was implying, that you just
have to rely on instinct to steer clear of these situations. If you frame it
as a failure to isolate the source of the bug before trying to fix it, then
there's one simple thing you can do: take a moment to _find and understand_
the bug rather than just making assumptions and trying to debug by guess and
check.

tl;dr: Never send philosophy when a forehead slap will do.

~~~
mdorazio
How is it not a JTB?

Belief: The pull request broke the search field auto focus.

Truth: The pull request _did_ break it. There was an additional reason beyond
the pull request unknown to the author, but that's not important to the Truth
portion here.

Justified: This is the only one you can really debate on, just as philosophers
have for a long time. Was he justified in his belief that he broke autofocus?
I think so based on the original JTB definition since there is clear evidence
that the pull request directly led to the break rather than some other event.

I think that when claiming it's not a JTB you're choosing to focus on the
underlying (hidden) issue(s) rather than what the author was focusing on,
which is kind of the whole point of Gettier's original cases. For example Case
I's whole point is that facts unknown to Smith invalidate his JTB. In this
programming example, facts unknown to the author (that someone else introduced
a bug in their framework update) invalidate his JTB as well.

~~~
bunderbunder
His real belief was not exactly that the PR broke it, it's that the root cause
of the break was isolated to his code changes. This is evident from the
debugging procedure he described. And that distinction is very important,
because that detail, and not some abstract piece of philosophy, is also the
real source of the challenges that motivated describing the situation in a
blog post in the first place.

What I'm really trying to say is that the article isn't describing a situation
that relates to Gettier's idea at all. Gettier was talking about situations
where you can be right for the wrong reasons. The author was describing a
situation where he was simply wrong.

~~~
darawk
> His real belief was not exactly that the PR broke it, it's that the root
> cause of the break was isolated to his code changes. This is evident from
> the debugging procedure he described. And that distinction is very
> important, because that detail, and not some abstract piece of philosophy,
> is also the real source of the challenges that motivated describing the
> situation in a blog post in the first place.

Yes, but the exact same point can be made about the Gettier case. The problem
is inappropriately specified beliefs. The problem with _that_ is that it's
impossible, ex ante, to know how to correctly specify your beliefs.

For instance, you could just say that the problem with the Gettier case is
that the person _really_ just believed there was a "cow-like object" out
there. Voila, problem solved! But the fact of the matter is that the person
believes there is a cow - just like this person believes that their PR broke
the app.

~~~
fhood
I think I agree with the parent. While this can be made into a Gettier case by
messing with the scope of the JTB (pull request broke it vs. change broke it),
I don't think it really works as intended by the author, and it feels like a
poor example in a field teeming with much more straightforward instances.

I can't simplify the explicit examples I have in my head enough to be worth
typing up, but the gist is that I can be correct about the end behavior of a
piece of code, but completely wrong about the code path it takes to get there.
I have good reasons to believe it takes that code path. But I don't know
about, say, a signal handler or interrupt that leads to the same behavior
without actually using the code path I traced out.

This happens to me reasonably often while debugging.

------
thewarrior
I think it helps to look at the mind as a probabilistic survival engine rather
than as some truth engine.

If there appears to be a cow in a random field, the odds are extremely low
that someone put a papier mache cow there. If there's something that has a 50%
chance of being a snake, you panic and run, because that's a 50% chance of
dying.

In the case of the author's bug, yes, the change he introduced had a good
probability of being the cause. However, he could have increased the
probability by going back over commits and confirming that his exact commit
introduced the bug. Now the probability goes even higher. It could still be
something machine-specific, a cosmic ray, or whatever, but those odds are
overwhelmingly low.

In practice causal reasoning also works in a probabilistic fashion.

I have a simple model saying that if a plane's engine is on then it's flying.
It's a single-step probability, so it's not very accurate in the real world.

I do a bunch of experiments and say a plane is flying if the engine is on and
air is going over its wings faster than a certain speed.

Now we have two correlations connected by a causal model that works in many
other cases. Hence the probability of it being correct rises.

But at the same time we should never mistake direct correlation for causality.
In daily life, though, it's "good enough".

~~~
dTal
This is an excellent comment - I'm just disappointed you managed to say it all
without the word "Bayesian". The base rate for papier mache cows in fields is
very low - one is perfectly justified in assigning a decent probability that a
field contains a cow, if one sees a cow-shaped object in it. If you are in a
part of the world that has a lot of cows in fields, you will presumably assign
an even higher probability. You might even say you're "sure" there's a cow in
the field, and act as such for everyday purposes. But don't be fooled - you're
not really sure. If someone offers to bet you ten dollars versus your life
that there is a cow in the field, you'll likely not take the bet.
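
To make that concrete, here's a minimal Bayes'-rule sketch in Python; every
number in it is invented purely for illustration:

    # All probabilities below are made up for illustration.
    prior_cow = 0.3              # base rate: chance this field contains a cow
    p_shape_if_cow = 0.99        # a real cow almost always looks like a cow
    p_shape_if_no_cow = 0.001    # papier mache cows in fields are very rare
    
    # P(cow | cow-shaped object), by Bayes' rule
    evidence = prior_cow * p_shape_if_cow + (1 - prior_cow) * p_shape_if_no_cow
    posterior = prior_cow * p_shape_if_cow / evidence
    print(round(posterior, 5))   # 0.99765: "sure" for everyday purposes, never 1.0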

It seems that the philosophers were grasping towards a definition of "know"
that encapsulated the idea of assigning 100% probability to something, after
incorporating all the evidence. From a Bayesian standpoint, this is
impossible. You can never be 100% certain of anything. To "know" something in
the sense of "a justified, true belief" is impossible, because a belief is
never both 100% and justified.

(Note that it is entirely possible to see the papier mache cow and conclude
that it is likely only papier mache and that there is _not_ a real cow in the
field. Is this belief "justified"?)

~~~
dTal
I've thought a bit more about it, and concluded that while the above is a neat
_answer_ , it doesn't explain the question, and [thewarrior]'s remarks were
nearer the mark on that. So here goes.

It's tempting to think of "knowledge" as some relationship between the mind
and a single fact. But when we use the word "knowledge", what we _actually_
mean is "an accurate world model" - a _set_ of beliefs. This is the
disconnect that Gettier cases are designed to expose - they construct
scenarios where someone's mental model is inaccurate or incomplete, yet by
sheer luck produce a single, cherry-picked correct prediction. We are
uncomfortable calling these correct predictions "knowledge" because as soon as
you start probing the rest of the mental model, it falls apart. Sure, they
think there's a cow in the field, and there really is one. Ask them any more
questions about the scenario though ("what color is the cow?") and they'll
give wrong answers.

From this perspective, "knowledge" as a "justified, true belief" is a
perfectly coherent concept - the problem lies with the inadequacy of the word
"justified" to describe the output of a complex decision procedure that
incorporates many beliefs about the world, such that it could be expected to
yield many other correct predictions in addition to the one in question, up to
some arbitrary threshold.

A thought experiment - suppose you tell the observer that the cow they see is
made of papier mache. They no longer believe there is a cow in the field.
Intuitively, has their knowledge increased or decreased?

------
austincheney
This is a lesson learned well from open-ended systems. An open-ended system is
one where input is received to process, but the input is not well defined. The
more accepted unknown input becomes, the more open the system must be in its
rules to process it. The results of processing are:

* expected output from a known input (intention)

* unexpected output from a known input (defect)

* expected output from an unknown input (serendipity)

* unexpected output from an unknown input (unintentional)

For example, I maintain a parser and beautifier for many different languages
and many different grammars of those languages. In some cases these languages
are really multiple languages (or grammars) imposed upon each other, and so
the application code must recursively switch to different parsing schemes in
the middle of the given input.

The more decisions you make in your application code, the more complex it
becomes, and predicting complexity is hard. Since you cannot know every
combination of decisions necessary for every combination of input, you do your
best to impose super-isolation of tiny internal algorithms. This means you
attempt to isolate decision criteria into separate atomic units, and those
atomic units must impose their decision criteria without regard for the
various other atomic decision units. Given well-reasoned data structures, this
is less challenging than it sounds.
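
A hypothetical sketch of such isolated decision units (the names and grammars
here are invented, not from any real parser): each unit answers one question
from the input alone, and only the dispatcher composes them.

    # Each decision unit looks only at its own criteria and knows nothing
    # about the other units' decisions.
    def starts_embedded_css(text, i):
        return text.startswith("<style", i)
    
    def starts_embedded_js(text, i):
        return text.startswith("<script", i)
    
    # The dispatcher composes the atomic units into a parsing decision.
    def pick_parser(text, i):
        if starts_embedded_css(text, i):
            return "css"
        if starts_embedded_js(text, i):
            return "js"
        return "html"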

The goal in all of this is to eliminate _unintentional_ results (see the
fourth bullet point above). It is okay to be wrong, as wrong is a subjective
quality, provided each of the atomic decision units is operating correctly.
When that is not enough, you add further logic to reduce the interference of
the various decision units upon each other. In the case of external factors
imposing interference, you must ensure your application is isolated and
testable apart from those external factors, so that when such defects arise
you can eliminate as many known criteria as rapidly as possible.

You will never be sure your open-ended system works as intended 100% of the
time, but with enough test samples you can build confidence against a variety
of unknown combinations.

~~~
chii
How are 2 & 3 above different from 4?

An unknown input producing correct results is still a problem - the unknown
input is the problem.

Therefore, I postulate that anytime an unknown input is possible, the software
is defective.

~~~
pjc50
The entire world of AI relies on dealing with "unknown" input?

~~~
gnode
I would say yes.

There's a saying that when people figure out how to make a computer do
something well, that it's no longer in the field of AI. I'd say there's some
truth in this, in that for many problems we have solved well (e.g. playing
chess), the intelligence is not that of the machine, but of the programmer.

I think that in order for a machine to genuinely be intelligent, it must be
capable of original thought, and thus unknown input. Known doesn't necessarily
mean specifically considered, but that it could be captured by a known
definition. As an example, we can easily define all valid chess moves and
checkmates, but we can't define the set of images that look like faces.

------
stareatgoats
The propensity for mistaking belief for facts certainly takes daily hits as a
software developer. "How come this simple thing isn't working? I thought of
everything, didn't I?" After a while you are forced to realize that belief
isn't the same as reality.

It seems insights like this don't easily translate into other domains though,
like relationships, dearly held political views etc. We prefer to think of
them as based on facts, when in all probability they are merely beliefs
fraught with assumptions.

Some people might be good at being skeptics in all areas, but I sense most
share my own ineptitude here, the reason probably being that any such
(incorrect) beliefs don't immediately come back and bite us, as in
programming.

~~~
james_s_tayler
The funny thing about development is that, say, 90% of the time you are
convinced everything is correct and should be working, and it's immensely
frustrating because it's not; so you know you are wrong but are unable to
offer yourself a more convincing theory, and just get stuck until something
clicks. But then there's that 10% of the time where you're actually right. And
you don't know which one it's going to be. So you have to calm yourself down
like "I know I think I'm right about this but clearly I'm not," but at the
same time you have to hold onto that conviction, because you're right damnit.
Haha.

~~~
hopler
As Raymond Smullyan proved, everyone is either inconsistent or conceited.

~~~
james_s_tayler
[http://www.mit.edu/people/dpolicar/writing/prose/text/episte...](http://www.mit.edu/people/dpolicar/writing/prose/text/epistemologicalNightmare.html)

Reading this made my day.

------
austinjp
The "cow in the field" example reminds me of two heuristics I like: am I right
for the wrong reason?; am I wrong for the right reason?

Being right for the wrong reason is dangerous: it's not easy to spot, and it
perpetuates a false sense of security, leaving "black swan events" unanticipated.
This might occur during debugging as the article points out, or e.g. during
A/B testing of a product.

Being wrong for the right reason is just plain frustrating.

~~~
chii
> Being wrong for the right reason is just plain frustrating.

what's an example of being wrong for the right reason? I can't think of any
cases where this happens...

~~~
reinhardt1053
In the context of political forecasting, imagine that you are a defence chief
who is faced with an unquantifiable external threat, as the US was by Russia
during the Cold War. You can predict that this enemy is a very great threat,
or you can say that it isn’t much of a threat, but the outcomes are
asymmetric. If you say that the threat is a grave one, and strengthen your
defences accordingly, then if the enemy attacks, you were clearly right to
take the threat seriously. If the enemy doesn’t attack, you were still right,
because you can say that the enemy only didn’t attack because of the action
you took. On the other hand, if you dismiss the threat as insignificant, and
the enemy attacks, then at best your career comes to a sudden and unpleasant
end. So therefore, it is always right to over-emphasise the threats, and if
you turn out to be wrong, you were wrong for the right reason.[1]

[1] [https://wiseinvestment.co.uk/news/antiques-roadshow-tony-yar...](https://wiseinvestment.co.uk/news/antiques-roadshow-tony-yarrow-november-2014)

~~~
chii
> always right to over-emphasise the threats, and if you turn out to be wrong,
> you were wrong for the right reason.

if the audience is not receptive to the concept of opportunity cost, then yes.
Unfortunately, a majority of people over-estimate the need for security and
thus, allow themselves to be fooled into believing that this over-emphasis, no
matter the cost, is justified.

Just look at the TSA!

------
phkahler
>> He called them “gettiers” for short. So we used to talk about gettiers all
the time, no doubt in part just because it felt clever to talk about them, but
also because when you’re a programmer, you run into things that feel like
Gettier cases with unusual frequency.

Sometimes I think that is what philosophers are doing - feeling clever -
perhaps as a defense against some negative inner problem (psychology is an
outgrowth of philosophy after all). The whole cow story stinks of telling
someone "you're right, but you're also WRONG! Your perception of reality is
BROKEN!". To me knowledge is simply having a model of the world that can be
used to make useful predictions and communicate (and some other things). Aside
from that, it doesn't matter if your model is "grounded in reality" until it
fails to work for you, at which time it can be helpful to realize your
knowledge (model) needs adjustment.

One way to resolve the author's first software issue would be to check a diff
between what he committed and the previous production revision - this would
quickly uncover the changes he "didn't make". This is an old lesson for me -
what I changed may not be limited to what I think I changed. It's a lesson in
"trust but verify". There are any number of ways to view it, but in the end we
only care about ways that lead to desired outcomes, whether they're "right" or
not.

On a related note, I've found that software is one of the _only_ places where
there is a "ground truth" that can be examined and understood in every detail.
It's completely deterministic (given a set of common assumptions). I've found
the real world - and people in particular - to not be like that at all.

~~~
thejohnconway
> Sometimes I think that is what philosophers are doing - feeling clever -
> perhaps as a defense against some negative inner problem (psychology is an
> outgrowth of philosophy after all).

All science is an outgrowth of philosophy.

It's very frustrating when people look at the obviously trivial and sometimes
silly examples that philosophers use to elucidate a problem, and take it to
mean that they are interested in trivial and silly things. Being right for the
wrong reasons is a common and difficult problem, and some of the solutions to
it are really insightful and powerful ideas.

> Aside from that, it doesn't matter if your model is "grounded in reality"
> until it fails to work for you, at which time it can be helpful to realize
> your knowledge (model) needs adjustment.

It might matter a great deal if your model is not grounded in reality - there
are situations where that can kill you. It also seems like one of the
fundamental aims of science, to have theories fail less often.

------
pjc50
There's a much closer analogy to the cow story in software development. The
cow story is confusing because the cow that you see (A) is fake, but the real
one (B) you don't know about. So your belief is not a justified true belief,
because although the real cow exists, the one your knowledge refers to (A)
isn't the real one (B).

An intertemporal variant of this is race conditions. There have been lots of
problems of the form "(1) check that /tmp/foo does not exist (2) overwrite
/tmp/foo"; an attacker can drop a symlink in between those steps and overwrite
/etc/passwd. The file that you checked for is not the same file that you wrote
to; it just has the same _name_. This is an important distinction between
name-based and handle-based systems.
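
A minimal Python sketch of the difference (paths and data are illustrative):

    import os
    
    # Name-based and racy: the file you checked is not guaranteed to be the
    # file you write; an attacker can drop a symlink between the two steps.
    def write_racy(path, data):
        if not os.path.exists(path):
            with open(path, "w") as f:
                f.write(data)
    
    # Handle-based: O_CREAT | O_EXCL makes check-and-create a single atomic
    # step, refuses to follow a symlink at that name, and every later write
    # goes through the fd we were handed, not through the name.
    def write_safe(path, data):
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
        with os.fdopen(fd, "w") as f:
            f.write(data)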

~~~
daveFNbuck
If the real cow (B) was not present, your belief that there was a cow in the
field would be justified but not true. Seeing the fake cow (A) justifies your
belief that there's a cow in the field. Adding a real cow (B) that you can't
see doesn't remove the justification.

~~~
pjc50
Good point; I've edited "justified" to the full phrase "justified true belief".

~~~
daveFNbuck
If the belief is both justified and true, how is it not a justified true
belief?

------
pwaivers
> _I could have had a JTB that the code change had caused the emails to stop
> delivering, but still we wouldn’t want to say I “knew” this was the cause,
> because it was actually the service outage that was directly responsible._

He is wrong, and this is not a gettier in any way. "The code change had caused
the emails to stop delivering" is not a JTB, because it is not true. Rather,
it was that the email server went down.

~~~
jemfinch
He simply wasn't speaking with precision. If you replace "code change" with
"pull request" in his statement, it's JTB.

~~~
pwaivers
No, you're talking about the first example. In the second example he says:

> _But—gettier!—the email service that the code relied on had itself gone
> down, at almost the exact same time that the change was released._

So the error was caused by the email service going down, which is completely
independent of the code change/pull request.

------
agarden
I don't see any of his examples as Gettier cases. He thought his code caused
the autofocus problem; it didn't. He thought someone else's push had broken
email, but instead the service happened to go down at the same time. A proper
Gettier case would be when you write code that you believe to be correct and
it does work, but not for the reasons you think it does. Often this eventually
bites when some edge case arises.

I run into this fairly often playing chess. I calculate out some tactic,
decide it works, and play the first move. But my opponent has a move I
overlooked which refutes the line I was intending to play. Then I find a move
that refutes his, and the tactic ends up working in the end anyway, just not
for the reasons I thought it would.

~~~
theoh
Here's an example that comes to mind:

A programmer writing a function refers to a local variable, "status", but
thinks they are referring to a global variable. The code works by chance
because the variables happen to have the same (fixed) value.

The variable shadowing means that the programmer could quite plausibly be
confused and believe that they were accessing the global variable
("justification"). "I know I checked the status variable, like I was supposed
to".

------
coldtea
> _A philosopher might say that these aren’t bona fide Gettier cases. True
> gettiers are rare._

I beg to differ. Besides the examples in programming the author gave, I can
very easily think of examples in medicine, police work (e.g. regarding
suspects), accounting, and so on...

~~~
yesenadam
Well, why not write them down for us?!

~~~
coldtea
As if they're difficult to derive on one's own?

You believe X has cancer because he has the symptoms and you can see an
offending black spot on his X-ray.

The lab results say the black spot was just a cyst, but X indeed has cancer in
the same organ.

~~~
gjm11
That sounds to me like (1) not a clear example of a Gettier case and (2)
something that would, in fact, be rare.

#1 because one of the reasons for your believing X has cancer is that "he has
the symptoms", which (if I'm understanding your example correctly) _is_ in
fact a consequence of the cancer he actually has; so, at least to some extent,
your belief is causally connected to the true thing you believe in just the
way it's not meant to be in a Gettier case.

#2 because (so far as I know) it's _not at all common_ to have both (a) cancer
in an organ that doesn't show up in your X-ray (or MRI or whatever) and (b) a
cyst in the same organ that looks just like cancer in the X-ray/MRI/whatever.
I'm not a doctor, but my guess is that this is _very unusual indeed_.

So this isn't a very convincing example of how clear-cut Gettier cases aren't
rare: it's neither a clear-cut Gettier case nor something that happens at all
often.

~~~
coldtea
In the general case it's "I believe X based on signs that would imply X
correctly, and I happen to be correct that X holds, but I misread the signs I
used to come to the conclusion".

I don't think this is rare -- including in the version of my example.

The only reason there's an argument that it's not a "clear-cut case" is that I
mentioned seeing "symptoms". Ignore the symptoms I mentioned, as they are a
red herring; e.g., seeing the mark alone could cause the belief.

Other than that, it's a belief (1), that's justified (2), and true (3) --
while being accidentally justified.

Consider the case of a policeman who thinks someone is dangerous because he
thinks he's seen a gun on them. So he shoots first, and lo and behold, the
suspect did have a gun on them -- but what the policeman saw was just a
cellphone or something bulky under their jacket.

Or the person who thinks their spouse is having an affair because they see a
hickey. The spouse indeed is having an affair (and even has a hickey on the
other side of the neck), but what they saw was just some small bruise caused
by something else.

Or, to stick with the theme, suspecting domestic abuse, which the victim does
indeed suffer, but your guess is based on a bruise they had from an accidental
fall.

------
dm03514
I can imagine a future where what's true generally describes itself (like
terraform on drugs for software :p). Imagine software that is fully
self-descriptive and would no longer require engineers to individually
interpret what's happening, because the software would tell us. The system
would be a graph of every single component, all possible connections between
them, and all variants of each component and state it could be in. When we
introduced a change, we would have perfect information about the effect on the
states and the paths between them.

In the example, the mental model was at a level too shallow; it should have
only affected the paths between the autofocus and the user. But the bug
necessitated a larger mental model (the author was considering too small a
subsection of the graph).

I'd hope that in the future we could reach a state where the program could
have detected that the framework refactor would have an effect on the
autofocus and all other components, instead of it being an implementation
detail.

------
jungler
Although many folks have raised the "not Gettier" objection, I would propose
that the premise of applying the test to _debugging_ is wrong. Debugging means
that your assumptions were already faulty: otherwise the system would not have
bugs.

That is, the act of programming means working on an unfinished thought,
something that can reflect some beliefs but compromises on being an exactly
true expression of them. And so the weight of philosophical reasoning should
appear at design time. What occurs after that is a discovery phase in which
you learn all the ways in which your reasoning was fallacious - both bugs and
feature decisions.

------
perlgeek
> (Yes, I should have caught the bug in testing, and in fact I did notice some
> odd behavior. But making software is hard!)

How often have I noticed some "odd behavior" in testing, and later wasn't able
to reproduce it? Some nagging feeling that I broke something remained, but
since I've deployed a new version (that fixed something else), and I couldn't
reproduce the "odd behavior", I tricked myself into ignoring it.

And then I deployed to production, and shit hit the fan.

Now I try to pay more attention to those small, nagging feelings of doubt, but
it takes conscious effort.

------
scyclow
This reminds me of a recent event at WrestleKingdom 13, a Japanese
professional wrestling event where, as you might imagine, pretty much
everything is planned and choreographed ahead of time.

In the opening match, Kota Ibushi suffered a concussion. Some doctors came
out, carried him out on a stretcher, and took him to the back. As it turns
out, this was all planned. The doctors were fake, and this course of events
was determined ahead of time. But coincidentally, Ibushi _actually_ suffered a
real-life concussion in the match.

Wrestling always has an interesting relationship with reality.

------
robbrit
This is precisely why when dealing with bugs I advise juniors to avoid asking
the question, "what changed?" Gettier cases are just one problem that you can
face when asking that question.

Instead I usually tell them to do it the proper way: start from the bug, and
work backwards to understand why that bug is happening. At that point the
change that caused the bug becomes obvious, and most of the time we realize
that we probably wouldn't have come to that conclusion by looking just at what
changed.

~~~
RugnirViking
This is one of two approaches to the problem of debugging a system. Its
advantage is that, assuming the programmer can focus on everything they have
seen in the debugger for long enough, they can find where the issue arises.

Its disadvantage is that as systems get larger, it can get exponentially more
time-consuming. As programmers we sometimes learn tricks (read: assumptions)
to cut down this time, but in the end the complexity of the system beats all
but the very best/most determined.

Consider tracing a bug in this manner through the entire code of something as
complicated as an operating system. Most of the code you did not write
yourself, and you have likely never seen before and no idea what it does. Each
new frame the debugger reaches you have to spend time understanding what is
happening before determining if this is where the problem occurs, and there
are so many frames that it can become difficult to sort through them all.

------
Antonio123123
These are called false positives, and they are a normal occurrence in the life
of a software developer. That's why when testing the root cause you should
test for a false positive as well.

------
jeffreyrogers
I haven't read the original paper, so maybe the example is better there, but
it seems the cow example fails the justified condition. The knowledge is
justified if it derives from the evidence, but once we know the evidence is
faulty it can no longer be used for justification, by definition. It seems by
extension that any justified true belief can become unjustified by the
addition of new information that invalidates the justification on which the
alleged knowledge is based.

~~~
bunderbunder
What you're saying is more or less exactly what the paper was getting at.

It's hard to say based on a short Internet comment, but it sounds like the
spot where your disagreement comes from is that you're understanding the word
"justified" in a slightly different way from how epistemologists were using
it. For example, one of the responses to Gettier's paper was to suggest that
maybe the definition of "justified" should be altered to include a provision
that invalidating the justification would imply that the belief is false.

So, for example, under that modified definition, the visual evidence couldn't
serve as a justification of the belief that there is a cow in the field,
because it allows the possibility that it isn't a cow but there still is one
in the field. On the other hand, it _would_ work for justifying a belief like,
"I can see a cow from here." (Yeah, there's another cow in the field, but it's
not the one you think you see.) But, still, that wasn't quite the definition
that the mid-century epistemologists who made up Gettier's audience were
using.

(ETA: Also, the original paper didn't involve cattle at all. Wikipedia has
what looks like a good summary:
[https://en.wikipedia.org/wiki/Gettier_problem#Gettier's_two_...](https://en.wikipedia.org/wiki/Gettier_problem#Gettier's_two_original_counterexamples))

~~~
jeffreyrogers
Thanks, I think you're right about how I was understanding the word
"justified". I like bringing up philosophical disagreements on HN since it
often gets responses like yours :)

~~~
bunderbunder
Yeah, sorry, though, I realized after I posted that I failed to properly
acknowledge that you hit the nail on the head -- I picked that response to
Gettier specifically because it matched your criticism.

------
azinman2
Sounds like (from the cases presented) an over intellectualisation of
“coincidences.”

------
chasingthewind
Another really great work that's related to some of these concepts is "Naming
and Necessity" by the philosopher Saul Kripke.

[https://en.wikipedia.org/wiki/Naming_and_Necessity](https://en.wikipedia.org/wiki/Naming_and_Necessity)

It investigates how we assign names to things and what those names mean and
how we can reason about them.

------
joe_the_user
I know mathematical logic but I don't know much about conventional philosophy.
The Gettier argument seems to indicate that a system of false propositions
used to arrive at a conclusion could be called "justification" in normal
epistemology - that seems a bit disturbing. Being "justified" in believing X
means just having some reason, maybe utterly bogus but existing and convincing
to you, for believing X (and then X becomes knowledge even if X happens to be
true for entirely different reasons). How could no one have commented on that
before, if that was how the system worked?

Edit: the main practical implication of the argument seems to be that when you
have an argument for X and you then get empirical evidence for X, you cannot
take that as proof that your argument for X is sound. It might be suggestive
of the truth of the argument, but the structure of the argument also has to be
taken into account. But that's been a given in scientific and statistical
investigations for a long time.

------
mcgwiz
I initially thought this article might be about programming by coincidence [1]
or maybe about user experience superstitions [2], but after reading it I
wonder if this isn't just about the practice of debugging. Software is
complex. When someone begins investigation into a bug, if the fix is not
immediately found, it becomes a matter of running down a list of hypothetical
causes. As one's experience deepens, both the "running down" and the curating
of the "list" become more efficient. IMHO this article is merely about a
developer who was unaware of certain hypothetical causes.

1: [https://pragprog.com/the-pragmatic-programmer/extracts/coinc...](https://pragprog.com/the-pragmatic-programmer/extracts/coincidence)

2: [https://www.nngroup.com/articles/quality-assurance-ux/](https://www.nngroup.com/articles/quality-assurance-ux/)

------
mlthoughts2018
I actually disagree with the Gettier thought experiment and don’t believe it
demonstrates anything interesting.

When you see the cow (but it’s really a convincing model), then in your mind,
there should be some probability assigned to a variety of outcomes. The main
one would be a cow, another one might be that you’re hallucinating, and so on
down the list, and somewhere the outcome of cow-like model would be there.

From that point you can go in at least two directions. One would be something
like a Turing test of the fake cow... beyond a certain point it's a matter of
semantics as to whether it's a real cow or not. Or you could say that your
"justified true belief" had to apply to the total state of the field. If you
believed there was both a cow model and a cow behind it, that would be
justified, but the existence of the cow behind the model would not justify the
incorrect belief that the model was a real cow, in the sense of not admitting
uncertainty over the things you see.

~~~
empath75
> From that point you can go in at least two directions. One would be
> something like a Turing test of the fake cow... beyond a certain point it's
> a matter of semantics as to whether it's a real cow or not. Or you could say
> that your "justified true belief" had to apply to the total state of the
> field. If you believed there was both a cow model and a cow behind it, that
> would be justified, but the existence of the cow behind the model would not
> justify the incorrect belief that the model was a real cow, in the sense of
> not admitting uncertainty over the things you see.

You're replacing the model it was criticizing with a different model and then
saying that it doesn't say anything interesting about your model, so it's not
interesting. It's not an argument that knowledge isn't possible, it was an
argument against the traditional definition of knowledge as it was almost
universally understood at the time.

~~~
mlthoughts2018
I’m saying the model it would like to criticize is not an interesting or
worthwhile model to talk much about.

------
gorpomon
It's a bit in the weeds, but I think the author has the wrong JTB. The author
deployed multiple changes, and just incorrectly assumed that their PR was the
one that introduced the bug. The author had incorrect knowledge about what was
being deployed. If something in their deploy process indicated that in fact
only their code was being deployed, then perhaps it's a JTB? But otherwise I
think it's just a bit off.

However, the gist of it is correct. We often update dependencies or deploy
more than we think we do. We have an "us" focused view of our code, and
keeping gettier cases in mind helps us break out of that.

Just recently I kept thinking that I didn't know how to write a jest test,
when in fact I was using a version of Jest which didn't support a certain
method. It's easy to think it's our fault, when in fact there can be deeper
reasons.

------
osharav
Interesting to compare this with a simpler term, "red herring": something that
throws you off course.

------
newsbinator
These cases seem to come up often (weekly?) in software development. I wonder
how often they come up in other professions.

One common case is when you change or delete a comment, and suddenly something
breaks. It couldn't have been the comment... but it was working fine before my
edit... wasn't it?

~~~
chrismorgan
And then as you look closer, you wonder how it _ever_ worked. Hang on, _did_
it ever work?

~~~
Insanity
I lost quite a few hours trying to restore a feature after I made a commit,
only to find out that it was broken for weeks already. Or worse, was not even
implemented yet.

It's amazing how that just keeps on happening.

------
oh_sigh
I was surprised when I first learned that this was a novel philosophical
concept, because I recall reading a Renaissance-age Italian story (maybe from
the Decameron?) that talks about this:

Basically, a man sees his wife walking in the town square with the hat of a
friend of theirs, and this leads him to believe that she is cheating with that
friend. It turns out that the friend had just offered her the hat in the
market, to help her carry some eggs home, and she was going to return it. So
she goes and returns it, the husband follows her, and it turns out she
actually is cheating on him with the friend, but the hat had nothing to do
with it.

------
p2detar
This hits close to home as a possible reason why I could never get good at
solving geometry problems, solid geometry especially. Most problems would be
trivial when one assumes specific preconditions, but my mind was always
wandering around, looking at all potential sides of a problem, and I could
never solve anything. To quote the author, from my particular point of view:

    a problem has multiple potential causes, and you have every reason to
    believe in one of them, even though another is secretly responsible.

Reminds me that I need to pick up a book and re-learn the damn thing. It
really saddens me that I suck at geometry.

~~~
azmodeus
I think you should try looking at geometry more like a creation initially then
a problem. In this manner you can see assumptions as just building up more
simple worlds with those constraints. I have found this view helps when
teaching geometry as it empowers the mind.

------
gweinberg
I don't understand how anyone could have ever thought "justified true belief"
was a good definition of knowledge, since the question of "what constitutes
justified belief?" is muddier than the question "what constitutes knowledge?"
in the first place. Further, even without considering such absurd situations
as a real cow hiding behind a fake cow, if you see something that, based on
its appearance, almost certainly is a cow, the near certainty doesn't change
into absolute certainty just because what appears to be a cow is in fact a
cow.

------
scottlamb
As usual, science has a more practical take on this. Occam's Razor says that
if you see a cow shape and hear a cow sound coming from that direction, the
most likely explanation is that you're seeing a cow. It retains the
possibility that this isn't true; the belief can be falsified in several ways:
by examining the cow more closely, by the real cow walking out from behind the
fake one, etc.

I think it follows that we never absolutely "know" something. We
asymptotically approach knowledge. The scientific method is a way of
approaching truth.

------
bryanrasmussen
I don't think the example of checking in someone else's bug is a very good
one. If I checked in something that had code from other people in it, I would
see that very easily, and if I did not think there was any way my code should
have affected the autofocus, then I would assume the code I checked in that
was not mine broke the autofocus.

Matching it to the example of the papier mache cow doesn't really work,
because the papier mache cow hides the real cow, whereas it is very easy to
see that your code was checked in along with other people's code.

------
voidpointer
Somehow, the examples all sound more like a quality and/or testing issue. The
workflow seems prone to people rebasing to a buggy state, and at that point,
in a non-trivial system, all bets are basically off. I need to be able to have
a "JTB" that a pull request has undergone enough review and testing before
being merged to master that it doesn't introduce such glaring regressions as
cited in the examples. If that cannot be ensured, I'm setting myself up for
one goose chase after another...

------
mindwork
I learned this word, and I'm scared of and awed by it every time. I think it
fits here. There are two worlds: the believed and the true. And when they
merge it's called "peripeteia".

~~~
Rzor
I read the Wikipedia entry. It seems that it could be better explained as a
"plot twist": when your confidence in an outcome is overturned, for better or
worse.

------
pitermarx
It might be true that even though I have a JTB about something, I might be
wrong.

Nevertheless, I think it would be reasonable to act upon a JTB as if it were
true. For all practical purposes it is true, to the best of my knowledge. This
does not mean I shut out new information that might make me change my JTB.

And if having a JTB is not knowledge, what is? What can we know? We can always
imagine a world where even our most firm JTB might be false. If a JTB is not a
good case for using the word "knowledge", I don't know what is.

------
aaroninsf
Always bemused and not a little confused that anyone (most notably, Americans)
still spends so much time and energy on analytical philosophy [and its Quine-y
assertions about semantics] so many decades after its sort of formal semantics
collapsed as a useful way of analyzing natural language.

Linguistics (not to mention, comp lit or continental philosophy) departments
have an order of magnitude more to say about meaning in natural language and
have had for... decades and decades.

I just don't get it.

------
anthonyriggi
I don't like this approach. If everyone were to approach a problem with this
mentality it would conjure doubt in the entire process. Nothing would ever get
done. Question: "how do we know if anything exists?" <-- (an extreme example).
Answer: "well, we don't, but it doesn't help us with the realities of the
problem at hand." I think this idea introduces confusion and does more harm
than good.

------
ppeetteerr
This reminds me of how some studies are shown to be true, but the reasons they
are true are not the reasons the author of the study presents. Instead, the
truth is either a coincidence or a correlation, not causation. These people go
on to write books, and entire industries are formed around these hypothetical
truths, and it takes years to undo the damage of the original study (e.g. the
marshmallow test, the jam experiment).

------
karmakaze
So just to be clear, a 'gettier' is when something you Believe and have
Justification for turns out to be false?

    
    
      Actually True
      J B
      1 1 knowing
      1 0 denying
      0 1 lucky hunch
      0 0 sceptic, default position
    
      Actually False
      J B
      1 1 mis-justification: un/incorrectly-verified 
      1 0 lucky denial of mis-justified
      0 1 superstition
      0 0 sceptic, default position

~~~
hopler
No, it's when you get the right answer for the wrong reason, like a math
problem solution with two mistakes that cancel out in the end result but that
is still incorrect logic.

------
ozy
Note that most Gettier cases are plays on our intuitions:

What feels like a pointer is actually a category. That is, it feels like it
points to one, but it points to many. Like both examples given here:
[https://en.wikipedia.org/wiki/Gettier_problem](https://en.wikipedia.org/wiki/Gettier_problem)
.

~~~
hopler
For the C heads, you mean what feels like a pointer is actually an array? :-)

------
veddox
I had a philosophy lecture last year that included a lot of epistemology
(Theory of Knowledge). We talked a fair bit about justified true beliefs, but
Gettier only came up in a side note - the professor being more interested in
skepticism and the responses thereto. Never would have dreamt of applying that
lecture to programming, though.

------
isacikgoz
The article was great. I also find that philosophy and software intersect
occasionally. The response of the author to the Gettier cases is expected
behavior. In fact it is a blessing. 99% of the time there is a cow; we have
experienced that before, and that's the reason for our confidence. We easily
solve our problems with this approach.

------
golergka
The first example is exactly the reason why I hate rebases and prefer merges
and complicated history instead. It may be more complicated, but it doesn't
sweep problems under the rug.

~~~
hopler
That's why in some quarters it's called git _debase_.

------
callesgg
My own opinion is that knowledge is always relative to a perspective. It is
only valid in a context.

Nothing is absolute.

Example:

1=1 is something I know is true because I know the rules of mathematics. There
is no absolute truth to it.

~~~
gerbilly
10+3 = 13 when talking about coconuts, but 10+3 = 1 when talking about time[1]

[1] In America anyways.
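
In clock terms that's just addition modulo 12, a one-line sketch:

    # 12-hour clock arithmetic: addition wraps around modulo 12
    print((10 + 3) % 12)  # 1 -- ten o'clock plus three hours is one o'clock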

------
amelius
As software abstractions often put up facades in front of similar
abstractions, this is bound to happen to us software developers.

------
sunstone
Didn't Descartes cover this with "I think therefore I am."? Everything else is
varying degrees of speculation.

------
thefringthing
My reaction to Gettier cases as a philosophy minor was that J is the only
philosophically interesting part of JTB.

------
paulsutter
“Justified true belief”? All our knowledge is subjective by definition. We
don’t even know whether we’re living in a simulation.

Personally I doubt that we're living in a simulation. But the fact that we
could be demonstrates that we don't have objective knowledge. No cows in a
field are needed to explain it.

Philosophy might better be called “the history of flawed thinking”

~~~
gnode
> We don’t even know whether we’re living in a simulation.

I think it's actually worse than this. This scenario suggests that our minds
are capable of infallible reasoning yet we may not be able to trust our
observations. Really, I don't think we can even trust our own mind, and
therefore JTB is undefinable.

~~~
yetanotherjosh
This strikes me as somewhat similar to saying that we should not use Newtonian
mechanics because we know that at the subatomic level Newton's laws do not
apply.

But Newtonian mechanics are still extremely valuable, worth discovering and
understanding, and anyone catching a baseball is employing them quite
proactively. JTBs are the Newtonian mechanics of epistemology. You can pick
them apart at a deeper level and show how they don't really exist, but they
are still incredibly useful.

~~~
gnode
I'm not saying that we should entirely disregard epistemology. After all,
belief is necessary to make decisions, and thus to thrive. Although I think
people should be mindful that knowledge may not be possible in actuality, just
as modern physicists are mindful of relativity and quantum mechanics.

------
RootKitBeerCat
So just more simply “red herring”

------
rdlecler1
Isn’t this just a confounding factor?

------
nga_
Isn't this the same as a coincidence?

------
bloak
To me this seems unhelpful. I'd say there is no "knowledge"; there's only
belief. And if you defined knowledge as "justified true belief" then you
couldn't apply the definition in practice in the real world because you don't
know when something is true. But that's philosophy for you: fun (for some
people) but not useful.

~~~
yesenadam
In an epistemology class once I said to the lecturer "I don't know anything."
He said "Don't you know your name?"

~~~
serpix
Not good enough; point to that which knows. Where is it? Who knows it? If
there is a knower of that, where is it? Keep going with this investigation and
realize there is no knower, only knowing.

Eastern philosophy nailed this thousands of years ago, and we westerners are
to this day totally in the dark. We actively treat the I as a concrete object
that really exists as an entity. It does not hold up to any closer examination
and evaporates entirely the closer it is questioned.

~~~
mbrock
I know my name just like I make a cup of tea. It's not you who makes my cup of
tea, right, so who is it? Well, me—the referent of my name, this person right
here.

That's just part of how our language works. It doesn't seem to matter whether
I am a "concrete object" or some swirly pattern of becoming or indeed even an
illusion! The English word "I" does not refer to an eternal soul or "atman."

If you stare long enough at an ice cream you'll have the marvelous insight
that in reality there is no concrete ice cream entity, not least because it
melts. Yet people don't go around saying "wake up, there are no ice creams!"
Why is that?

