At least as presented, I see the idea being used to do more harm than good.
Take the first example, with the form not autofocusing. We're already not in a Gettier case, because the author didn't have JTB. The belief that he caused the bug obviously wasn't true. But it wasn't justified, either. The fact that he had rebased before committing means that he knew that there were more changes than just what he was working on between the last known state and the one in which he observed the defect. So all he had was a belief - an unjustified, untrue belief.
I realize this may sound like an unnecessarily pedantic and harsh criticism, but I think it's actually fairly important in practice. If you frame this as a Gettier problem, you're sort of implying that there's not much you can do to avoid these sorts of snafus, because philosophy. At which point you're on a track toward the ultimate conclusion the author was implying, that you just have to rely on instinct to steer clear of these situations. If you frame it as a failure to isolate the source of the bug before trying to fix it, then there's one simple thing you can do: take a moment to find and understand the bug rather than just making assumptions and trying to debug by guess and check.
tl;dr: Never send philosophy when a forehead slap will do.
Belief: The pull request broke the search field auto focus.
Truth: The pull request did break it. There was an additional reason beyond the pull request unknown to the author, but that's not important to the Truth portion here.
Justified: This is the only one you can really debate on, just as philosophers have for a long time. Was he justified in his belief that he broke autofocus? I think so based on the original JTB definition since there is clear evidence that the pull request directly led to the break rather than some other event.
I think that when claiming it's not a JTB you're choosing to focus on the underlying (hidden) issue(s) rather than what the author was focusing on, which is kind of the whole point of Gettier's original cases. For example Case I's whole point is that facts unknown to Smith invalidate his JTB. In this programming example, facts unknown to the author (that someone else introduced a bug in their framework update) invalidate his JTB as well.
What I'm really trying to say is that the article isn't describing a situation that relates to Gettier's idea at all. Gettier was talking about situations where you can be right for the wrong reasons. The author was describing a situation where he was simply wrong.
Yes, but the exact same point can be made about the Gettier case. The problem is inappropriately specified beliefs. The trouble with that diagnosis is that it's impossible ex ante to know how to correctly specify your beliefs.
For instance, you could just say that the problem with the Gettier case is that the person really just believed there was a "cow-like object" out there. Voila, problem solved! But the fact of the matter is that the person believes there is a cow - just like this person believes that their PR broke the app.
I can't simplify the explicit examples I have in my head enough to be worth typing up, but the gist is this: I can be correct about the end behavior of a piece of code, but completely wrong about the code path it takes to get there. I have good reasons to believe it takes the path I traced out. But I don't know about, say, a signal handler or an interrupt that leads to the same behavior without actually using that code path.
This happens to me reasonably often while debugging.
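A minimal Python sketch of that trap - a toy exception fallback standing in for the signal-handler case, with all names invented for illustration:

```python
def parse_count(value):
    # The path you traced: plain integer strings go through int().
    try:
        return int(value)
    except ValueError:
        # The path you forgot about: "3.0" raises above, and the same
        # end result is produced by this fallback instead.
        return int(float(value))

print(parse_count("3"))    # 3, via the path you traced
print(parse_count("3.0"))  # 3, via the fallback you didn't know about
```

Both calls return the same value, so observing the end behavior tells you nothing about which path actually ran.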
The idea that software has 'gettiers' seems accurate and meaningful. To some degree, making and maintaining gettiers is in fact the point of black-boxing. Something like a well-implemented connection pool is designed to let you reason and code as though the proxy didn't exist. If you form beliefs around the underlying system you'll lack knowledge, but your beliefs will be justified and ideally also true.
(One might argue that if you know about the layer of abstraction your belief is no longer justified. I'd argue that it's instead justified by knowing someone tried to replicate the existing behavior - but one form of expertise is noticing when justified beliefs like that have ceased to be true.)
And yet this story isn't about facades breaking down; it's just a common debugging error. Perhaps the precise statement the author quotes is true and justified, but the logic employed isn't. And it's an important difference: being aware of environment changes you didn't make is a useful programming skill, while being aware of broken abstractions and other gettier problems is a separate useful skill.
"When I released the new version, I noticed that I’d broken the autofocusing of the search field that was supposed to happen on pageload."
That's it. That's the belief - he broke autofocusing when he released the new version. This was true. The later digging in to find the root cause is merely related to this belief. And yes I agree that Gettier's cases were meant to show that correct belief for the wrong reasons (maintaining the three criteria essentially), but this case meets that intent as well. The author is correct that he broke autofocus via his pull request, and thus JTB holds, but the actual reason for it is not his personal code and thus the Knowledge is incorrect.
In software, for known behavioral specs, you don't have a real justification until you write a formal proof. Just because formal proofs are uneconomical doesn't mean there's some fundamental philosophical barrier preventing you from verifying your UI code. Doing so is not just possible; there are even tools for doing it.
So really, this is not a Gettier case, because in formal systems a true justification is possible to construct in the form of a mathematical proof.
An example of a Gettier case in a software system would be formally verifying something with respect to a model/spec that is a Gettierian correct-but-incorrect model of the actual system.
There are almost no software systems where we have true JTB (proofs), so there are almost none where Gettier cases really apply.
Uncertainty in software is more about the economics of QA than it is about epistemology or Gettier style problems, and that will remain true until we start writing proofs about all our code, which will probably never happen.
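For a flavor of what such a true justification looks like, here's a toy machine-checked proof in Lean 4 - a trivial list property, nowhere near real UI code, but the checker rejects anything that isn't actually a proof:

```lean
-- Lean only accepts the theorem if the proof really establishes it;
-- `simp` discharges this one using the standard List simp lemmas.
theorem my_append_nil (xs : List Nat) : xs ++ [] = xs := by
  simp
```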
In particular, I think your criterion for justification is too low. The standard for justification is: however much is necessary to close off the possibility of being wrong.
I find the JTB concept useful to remind us (1) that the concept of knowledge is an ideal and (2) how vulnerable we are to deception.
As an idea survives rounds of falsification, we grow confidence that it is knowledge. But, as Descartes explained in the evil demon scenario, there is room for doubt in virtually everything we think we know. The best we can do is to strive for the ideal.
This is borderline self-referential with respect to the whole Knowledge definition, though. If you have enough information to remove the possibility of a belief being wrong, then there's no point in defining Knowledge at all. The whole debate around the definition is that humans have to deal with imperfect information all the time, and deciding what constitutes Knowledge in that environment is a challenge.
> At some point, I did a routine rebase on top of this change (and many other unrelated changes).
> (Yes, I should have caught the bug in testing, and in fact I did notice some odd behavior. But making software is hard!)
The significance of Gettier problems, as we investigated them, is that they expose an entirely _wrong_ mode of philosophy: philosophizing by intuition. Ultimately, the textbook Gettier problem is significant because _for philosophers_ it captures their intuitions of knowledge, and then shows that the case fails to be knowledge.
Most normal people (i.e., not philosophers) do not have the same intuitions.
After Gettier, analytical philosophers spent decades trying to construct a definition of knowledge that captured their intuitions about it. Two examples are [The Coherence Theory of Knowledge] and [The Causal Theory of Knowledge]. Ultimately, nearly all of them were susceptible to Gettier-like problems. The process could (probably) be likened to Goedel's incompleteness proof: they could not construct a complete definition of knowledge for which there did not exist a Gettier-like problem.
Eventually, more [Pragmatic] and [Experimental] philosophers decided to call the Analytical philosophers' bluff: [they investigated if the typical philosopher's intuition about knowledge holds true across cultures]. The answer turned out to be: most certainly not.
More pragmatic epistemology cashes out the implicit intuition and just asks: what is knowledge to us, how useful is the idea, etc. etc. There's also a whole field studying folk epistemology now.
And to concretely tie this directly back to software:
Intuition is a wonderful thing. Once you have acquired knowledge and experience in an area, you start getting gut-level feelings about the right way to handle certain situations or problems, and these intuitions can save large amounts of time and effort. However, it’s easy to become overconfident and assume that your intuition is infallible, and this can lead to mistakes.
One area where people frequently misuse their intuition is performance analysis. Developers often jump to conclusions about the source of a performance problem and run off to make changes without making measurements to be sure that the intuition is correct (“Of course it’s the xyz that is slow”). More often than not they are wrong, and the change ends up making the system more complicated without fixing the problem.
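That point can be made concrete in a few lines of Python: instead of arguing about which of two implementations is slow, measure them (function names here are invented for illustration):

```python
import timeit

# Two candidate implementations we have intuitions about.
def build_concat(items):
    s = ""
    for it in items:
        s += it            # repeated concatenation
    return s

def build_join(items):
    return "".join(items)  # single join

items = ["x"] * 10_000
t_concat = timeit.timeit(lambda: build_concat(items), number=100)
t_join = timeit.timeit(lambda: build_join(items), number=100)
print(f"concat: {t_concat:.4f}s  join: {t_join:.4f}s")  # measure, don't guess
```

The numbers, not the intuition, decide which one (if either) needs fixing.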
I generally think the point is that you oftentimes have beliefs about why something broke and that it is important to check your beliefs.
But then we can get back to a better Gettier case if we suppose that Boston Dynamics happens to be testing their new Robo-Cow military infiltration unit in the field at the time.
In the scientific method you can never prove hypotheses, you can only disprove them. 'nuff said.
I find the Gettier problem entirely uninteresting myself. Someone elsethread explained it in the context of technical discourse in philosophy, in which case I could maybe appreciate the issue as a way to test and explore epistemological theories in a systematic, rigorous manner, even if for many theories it's not particularly challenging to overcome.
It's all very well dismissing it as inconsequential, but several generations of philosophers and scientists have grown up in the post-Gettier world, before which the Justified True Belief account was widely considered unassailable. Yes, _now_ we all know better and are brought up knowing this, but Gettier and his huge influence on later thought are the reason why; it's just that not many people are aware of this.
Positivism was already well developed long before 1963.
I could go on. I've studied enough philosophy to be comfortable with my criticism. But I'll grant that it may not have been until after 1963 that all these philosophical strains became predominant. But that doesn't excuse people for not seeing what was coming.
I suppose the point is just that having a justified true belief purely by chance, almost certainly depends on the justification itself being a false belief, even in true Gettier cases.
Anyway, I find the whole thing fairly unmysterious - I just take 'know' to be a particularly emphatic form of 'believe', and I like your conclusion.
In many ways, programmers need to be as fussy in their statements as philosophers. Since computers are stupid and do exactly what you specify (usually...), it is important to pay close attention to the exact details. Assuming that the new code contains the bug is incorrect, and proper debugging requires careful attention to those details.
I've certainly had bugs that were caused by some other, hidden factor like this, and typically the best way to find them is to carefully examine all your assumptions. These may be ugly questions like "is the debugger lying to me?" or "did the compiler actually create the code I intended it to make?" So while these may not be strict Gettier cases (and the author admits this in the article), they nevertheless are fairly common classes of similar problems, and framing them as such does provide a useful framework for approaching them.
Maybe this doesn't conform strictly to the philosophical definition, but as an analogy I find it succinct and useful.
Many times I've fallen in the trap of absolutely _knowing_ in my head the reason things are the way they are, only to find out later I'm completely wrong. In most cases the evidence was right in front of me, but software is hard and complex and it is easy for even the best developers to miss the obvious.
Putting a term around it, having an article like this to discuss with the team - these are all useful for reinforcing the idea that we need to continually challenge our knowledge and question our assumptions.
I know _that_ auto-complete is broken (I see it, the test fails)
I do _not_ know _why_ auto-complete is broken (some other dude did it)
But I still think it's very interesting to talk about if for no other reason than that it clarifies terms and usage.
If there appears to be a cow in a random field, the odds are extremely low that someone put a papier mache cow there. If there's something that has a 50% chance of being a snake, you panic and run, because that's a 50% chance of dying.
In the case of the author's bug, yes, the change he introduced had a good probability of being the cause. However, he could have increased the probability by going back over commits and confirming that his exact commit introduced the bug. Now the probability goes even higher. But it could still be machine-specific, a cosmic ray, or whatever - the odds are just overwhelmingly low.
In practice causal reasoning also works in a probabilistic fashion.
I have a simple model saying that if a plane's engine is on, then it's flying. It's a single-step probabilistic rule, so it's not very accurate in the real world.
I do a bunch of experiments and say a plane is flying if the engine is on and air is going over its wings faster than a certain speed.
Now we have two correlations connected by a causal model that works in many other cases. Hence the probability of it being correct rises.
But at the same time we should never mistake direct correlation for causality. But in daily life it’s “good enough”.
It seems that the philosophers were grasping towards a definition of "know" that encapsulated the idea of assigning 100% probability to something, after incorporating all the evidence. From a Bayesian standpoint, this is impossible. You can never be 100% certain of anything. To "know" something in the sense of "a justified, true belief" is impossible, because a belief is never both 100% and justified.
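A small numeric sketch of that Bayesian point - each piece of evidence multiplies the odds in favor of the hypothesis, but the probability never reaches exactly 1 (the numbers here are chosen arbitrarily):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Repeatedly observing strong (but fallible) cow-shaped evidence.
belief = 0.5
for _ in range(5):
    belief = posterior(belief, 0.9, 0.1)
print(belief)  # close to 1.0, but never exactly 1.0
```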
(Note that it is entirely possible to see the paper mache cow and conclude that it is likely only paper mache and that there is not a real cow in the field. Is this belief "justified"?)
It's tempting to think of "knowledge" as some relationship between the mind and a single fact. But when we use the word "knowledge", what we actually mean is "an accurate world model" - a set of beliefs. This is the disconnect that Gettier cases are designed to expose - they construct scenarios where someone's mental model is inaccurate or incomplete, yet by sheer luck produce a single, cherry-picked correct prediction. We are uncomfortable calling these correct predictions "knowledge" because as soon as you start probing the rest of the mental model, it falls apart. Sure, they think there's a cow in the field, and there really is one. Ask them any more questions about the scenario though ("what color is the cow?") and they'll give wrong answers.
From this perspective, "knowledge" as a "justified, true belief" is a perfectly coherent concept - the problem lies with the inadequacy of the word "justified" to describe the output of a complex decision procedure that incorporates many beliefs about the world, such that it could be expected to yield many other correct predictions in addition to the one in question, up to some arbitrary threshold.
A thought experiment - suppose you tell the observer that the cow they see is made of paper mache. They no longer believe there is a cow in the field. Intuitively, has their knowledge increased or decreased?
- We have a definition of what a cow is, and we know that cows are discrete/physical objects, and have relatively fixed locations (i.e. that they are not like an electron cloud with a probabilistic location).
- We assume that fields A and B in your hypothetical have clear, non-overlapping boundaries.
- We assume that we are working in a fairly normal universe with a fairly standard model of physics, and that due to the way time works in this universe, a cow cannot simultaneously be located in both fields A and B.
- (this could get really pedantic and go on forever)
The point is, even the things "we can know with certainty", are only as certain as the framework of observations/deductions/axioms/etc. that they rest upon. Almost nothing is certain on its own, without any further layers of reasoning behind it.
For example, Nassim Taleb has an argument that IQ is a single-dimensional assessment of a multi-dimensional domain.
I think it's more practical to have a possibility space (where "unknown unknowns" is itself a possibility). This removes the need to assess probabilities (which will probably be incorrect) while still being able to permute through the list of possibilities. One can also do logical deductions, based on the possibility space, to assess possible strategies to explore/solve the issues at hand.
I see OP's comment not as denying knowledge, but rather as clarifying what is meant when people speak to each other of knowledge.
* expected output from a known input (intention)
* unexpected output from a known input (defect)
* expected output from an unknown input (serendipity)
* unexpected output from an unknown input (unintentional)
For example I maintain a parser and beautifier for many different languages and many different grammars of those languages. In some cases these languages are really multiple languages (or grammars) imposed upon each other and so the application code must recursively switch to different parsing schemes in the middle of the given input.
The more decisions you make in your application code the more complex it becomes and predicting complexity is hard. Since you cannot know of every combination of decisions necessary for every combination of input you do your best to impose super-isolation of tiny internal algorithms. This means you attempt to isolate decision criteria into separated atomic units and those separated atomic units must impose their decision criteria without regard for the various other atomic decision units. Provided well reasoned data structures this is less challenging than it sounds.
The goal in all of this is to eliminate unintentional results (see the fourth bullet point above). It is okay to be wrong, as wrong is a subjective quality, provided each of the atomic decision units is operating correctly. When that is not enough, you add further logic to reduce the interference of the various decision units upon each other. In the case of various external factors imposing interference, you must ensure your application is isolated and testable apart from those external factors, so that when such defects arise you can eliminate as much known criteria as rapidly as possible.
You will never be sure your open-ended system works as intended 100% of the time, but with enough test samples you can build confidence against a variety of unknown combinations.
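That "confidence from many samples" idea is essentially property-based testing. A stdlib-only sketch - the whitespace normalizer here is a made-up stand-in for the parser/beautifier described above:

```python
import random
import string

# Hypothetical unit under test: a naive whitespace normalizer.
def normalize(text):
    return " ".join(text.split())

# Property: normalizing twice is the same as normalizing once
# (idempotence), checked against many random "unknown" inputs.
random.seed(0)
alphabet = string.ascii_letters + "  \t\n"
for _ in range(1000):
    sample = "".join(random.choice(alphabet)
                     for _ in range(random.randint(0, 40)))
    once = normalize(sample)
    assert normalize(once) == once, sample
```

You still haven't proven anything, but each sample that upholds the property buys a little more confidence against input combinations you never thought of.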
An unknown input producing correct results is still a problem - the unknown input is the problem.
Therefore, I postulate that anytime an unknown input is possible, the software is defective.
Another way to think about this is that the more open a system is the more useful and risky it is. It is useful because it can do more while tolerating less terse requirements upon the user. It increases risk because there is more to test and less certainty the user will get what they want. Part of that risk is that its hard to guess at what users want as sometimes the users aren't even sure of what they want.
You expect that people will move forward, left, or right.
You didn't expect people to try moving backward.
People start moving backward, but the software happens to do the right thing due to how it was written.
Is the software defective because of your missed expectation?
To be not defective, the software has to explicitly reject input that it was not designed to handle.
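A sketch of that rule in Python, using the movement example above (names are invented):

```python
VALID_MOVES = {"forward", "left", "right"}

def move(direction):
    # Explicitly reject input the design never considered, rather than
    # relying on luck for "backward" to happen to do something sensible.
    if direction not in VALID_MOVES:
        raise ValueError(f"unsupported direction: {direction!r}")
    return f"moving {direction}"
```

With this, "backward" fails loudly at the boundary instead of accidentally working until some later change breaks it.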
Imagine if the software updated with some changes, and the unknown input now produces an incorrect output. Is the defect introduced with the changes? Or was the defect always there?
In some cases that breaks forward compatibility, e.g. the case where there is an unknown XML tag. You could reject the tag or the whole message, but then you'll end up rejecting all future inputs that add anything new.
If the whitelist of acceptable items is large, it may be acceptable to have a blacklist instead; however, if the above holds, you don't know what you don't know.
Turned out the bug had been latent in the code for 5+ years, predating me. Its data consumption had never been observed before because it was swamped by other data consumption. Changing the architecture to remove that other data brought it to the foreground.
(fwiw, the bug was caused by the difference between 0 and null as a C pointer!)
If you have alternate sensors, you should trust them more, and camera systems less.
If you have a sunshade, you should deploy that.
If it is raining, or partially cloudy, the situation may change rapidly.
And perhaps you should slow down, but if you slow down too fast, other vehicles might not be able to avoid you.
It's not professional to design systems that rely on luck.
"Let's ignore this edge case and hope we get lucky" is not something you want to see in a software specification.
There's a saying that when people figure out how to make a computer do something well, that it's no longer in the field of AI. I'd say there's some truth in this, in that for many problems we have solved well (e.g. playing chess), the intelligence is not that of the machine, but of the programmer.
I think that in order for a machine to genuinely be intelligent, it must be capable of original thought, and thus unknown input. Known doesn't necessarily mean specifically considered, but that it could be captured by a known definition. As an example, we can easily define all valid chess moves and checkmates, but we can't define the set of images that look like faces.
There's a difference between "breaking" unknown input - i.e. non-computable within the system as it stands - and "working" unknown input, which is within expected working parameters.
The latter is business as usual for computing systems.
The former should have a handler that tries to minimise the costs of making a mistake - either by ignoring the input, or failing safe, or with some other controlled response.
It may not do this perfectly, but not attempting it at all is a serious design failure.
It seems insights like this don't easily translate into other domains though, like relationships, dearly held political views etc. We prefer to think of them as based on facts, when in all probability they are merely beliefs fraught with assumptions.
Some people might be good at being skeptics in all areas, but I sense most share my own ineptitude here, the reason probably being that any such (incorrect) beliefs don't immediately come back and bite us, as in programming.
Reading this made my day.
This is what the technique of Rubber Duck Debugging helps with. I wonder if you could translate it to other domains?
Somebody might know X. And they might know X for all the right reasons. But they probably didn't tell you those reasons, and you probably wouldn't believe them or understand them if they did.
I think that can be expanded to the whole human race.
Being right for the wrong reason is dangerous: it's not easy to spot, and it perpetuates false sense of security leaving "black swan events" unanticipated. This might occur during debugging as the article points out, or e.g. during A/B testing of a product.
Being wrong for the right reason is just plain frustrating.
What's an example of being wrong for the right reason? I can't think of any cases where this happens...
if the audience is not receptive to the concept of opportunity cost, then yes. Unfortunately, a majority of people over-estimate the need for security and thus, allow themselves to be fooled into believing that this over-emphasis, no matter the cost, is justified.
Just look at the TSA!
Making the highest odds plays (and being able to figure out what they are) over and over again regardless of how the individual hands turn out is how you win.
Obviously, you can also win while making the wrong plays by getting lucky (right for the wrong reasons). Evaluating your play based on the outcome of the hands (did you win or lose) rather than the plays you made with the information you had at the time is called Results Oriented Thinking: https://www.pokerdictionary.net/glossary/results-oriented-th... ... and it is a pernicious mistake (with wider applications than just poker :).
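The "highest odds play" is just the play with the best expected value, which is easy to sketch (toy numbers, not real poker math):

```python
def ev_call(pot, call, p_win):
    # Expected value of calling: win the pot with probability p_win,
    # lose the call amount otherwise.
    return p_win * pot - (1 - p_win) * call

# A 30% shot at a 100-chip pot for 10 chips is a "right" call even on
# the nights it loses; the outcome of one hand doesn't change that.
print(ev_call(pot=100, call=10, p_win=0.3))  # positive EV -> call
```

Results-oriented thinking is judging `p_win` and the call by whether this particular hand happened to win, rather than by the numbers that were knowable at the time.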
My employer's stock is up 28% over the last year, S&P 500 is down 6%.
I guess my goal going in was to reduce variance, so I wasn't "wrong" about anything. 36% more money would be nice, though. (I haven't changed my strategy as a result)
Sometimes I think that is what philosophers are doing - feeling clever - perhaps as a defense against some negative inner problem (psychology is an outgrowth of philosophy after all). The whole cow story stinks of telling someone "you're right, but you're also WRONG! Your perception of reality is BROKEN!". To me knowledge is simply having a model of the world that can be used to make useful predictions and communicate (and some other things). Aside from that, it doesn't matter if your model is "grounded in reality" until it fails to work for you, at which time it can be helpful to realize your knowledge (model) needs adjustment.
One way of resolving the author's first software issue would be to check a diff between what he committed and the previous production revision - this would quickly uncover the changes he "didn't make". This is an old lesson for me - what I changed may not be limited to what I think I changed. It's a lesson in "trust but verify". There are any number of ways to view it, but in the end we only care about ways that lead to desired outcomes, whether they're "right" or not.
On a related note, I've found that software is one of the only places where there is a "ground truth" that can be examined and understood in every detail. It's completely deterministic (given a set of common assumptions). I've found the real world - and people in particular - to not be like that at all.
All science is an outgrowth of philosophy.
It's very frustrating when people look at the obviously trivial and sometimes silly examples that philosophers use to elucidate a problem and take them to mean that philosophers are interested in trivial and silly things. Being right for the wrong reasons is a common and difficult problem, and some of the solutions to it are really insightful and powerful ideas.
> Aside from that, it doesn't matter if your model is "grounded in reality" until it fails to work for you, at which time it can be helpful to realize your knowledge (model) needs adjustment.
It might matter a great deal if your model is not grounded in reality - there are situations where that can kill you. It also seems like one of the fundamental aims of science, to have theories fail less often.
The first two have all the problems philosophers talk about. But the last one does not. Not even underdetermination, unless the system of the model is fundamental or fades into history or is a "wicked" problem.
An intertemporal variant of this is race conditions. There have been lots of problems of the form "(1) check that /tmp/foo does not exist (2) overwrite /tmp/foo"; an attacker can drop a symlink in between those steps and overwrite /etc/passwd. The file that you checked for is not the same file as you wrote to; it just has the same name. This is an important distinction between name-based and handle-based systems.
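A Python sketch of the two patterns (illustrative only; real hardening also involves directory permissions and flags like O_NOFOLLOW):

```python
import os

# Racy check-then-create: an attacker can drop a symlink between the
# two steps, so the name you checked is not the file you write.
def write_unsafe(path, data):
    if not os.path.exists(path):        # step 1: check the *name*
        with open(path, "w") as f:      # step 2: the race window is here
            f.write(data)

# Atomic check-and-create: O_EXCL makes the OS fail if the name already
# exists (even as a dangling symlink), closing the window.
def write_safe(path, data):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)
```

The safe version is handle-based from the moment of creation: the `fd` refers to the file the kernel just created, not to whatever the name happens to point at later.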
I run into this fairly often playing chess. I calculate out some tactic, decide it works, and play the first move. But my opponent has a move I overlooked which refutes the line I was intending to play. Then I find a move that refutes his, and the tactic ends up working in the end anyway, just not for the reasons I thought it would.
A programmer writing a function refers to a local variable, "status", but thinks they are referring to a global variable. The code works by chance because the variables happen to have the same (fixed) value.
The variable shadowing means that the programmer could quite plausibly be confused and believe that they were accessing the global variable ("justification"). "I know I checked the status variable, like I was supposed to".
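A minimal Python version of that scenario (names invented) - the call "works", but only because the shadowing local and the global happen to agree:

```python
status = "READY"  # the global the programmer believes they are checking

def run_job():
    status = "READY"        # added later; silently shadows the global
    # ... work that was supposed to consult the global state ...
    if status == "READY":   # justified-feeling check, wrong variable
        return "started"
    return "blocked"

print(run_job())  # "started" -- for the wrong reason
```

The belief "I checked the status" is justified and the behavior is correct, but the justification is tracking the wrong variable; the bug only surfaces once the two values diverge.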
He is wrong and this is not a gettier in any way. "The code change had caused the emails to stop delivering" is not a JTB, because it is not true. Rather it was that the email server went down.
I don't think this really affects the take home message from the piece, I'm just being pedantic that it doesn't parallel perfectly (which he even acknowledges people may say in the last paragraph).
> But—gettier!—the email service that the code relied on had itself gone down, at almost the exact same time that the change was released.
So the error was caused by the email service going down, which is completely independent of the code change/pull request.
I beg to differ. Besides the examples in programming the author gave, I can very easily think of examples in medicine, police work (e.g. regarding suspects), accounting, and so on...
Software engineering has many of these Gettier cases, because most software engineers do not follow the scientific method when investigating a problem!
You believe X has cancer because he has the symptoms and you can see an offending black spot on his X-ray.
The lab results say the black spot was just a cyst but X indeed has cancer in the same organ.
#1 because one of the reasons for your believing X has cancer is that "he has the symptoms", which (if I'm understanding your example correctly) is in fact a consequence of the cancer he actually has; so, at least to some extent, your belief is causally connected to the true thing you believe in just the way it's not meant to be in a Gettier case.
#2 because (so far as I know) it's not at all common to have both (a) cancer in an organ that doesn't show up in your X-ray (or MRI or whatever) and (b) a cyst in the same organ that looks just like cancer in the X-ray/MRI/whatever. I'm not a doctor, but my guess is that this is very unusual indeed.
So this isn't a very convincing example of how clear-cut Gettier cases aren't rare: it's neither a clear-cut Gettier case nor something that happens at all often.
I don't think this is rare -- including in the version of my example.
The only reason there's any argument that it's not a "clear cut case" is that I mentioned seeing "symptoms". Ignore the symptoms I mentioned, as they are a red herring - e.g. seeing the mark alone could cause the belief.
Other than that, it's a belief (1), that's justified (2), and true (3) -- while being accidentally justified.
Consider the case of a policeman who thinks someone is dangerous because he thinks he's seen a gun on them. So he shoots first, and lo and behold, the suspect did have a gun on them - but what the policeman actually saw was just a cellphone, or something bulky under their jacket.
Or the person who thinks their spouse is having an affair because they see a hickey. The spouse is indeed having an affair (and even has a hickey on the other side of the neck), but what was actually seen was just a small bruise caused by something else.
Or, to stick with the theme, suspecting domestic abuse: the victim does indeed suffer abuse, but your guess is based on a bruise they got from an accidental fall.
I didn't write "one can easily" to imply I have some special talent for imagining such situations (and thus had a motive to leave examples out to hide the fact that I don't).
I wrote it because I really do believe one can easily find such examples, and didn't think it was even worth going into details (since I mentioned medicine, police work, etc., I thought the cases I implied were pretty clear too).
In any case, I gave 3 examples in a comment above.
In the example, the mental model was too shallow: in that model, the change should only have affected the paths between the autofocus and the user. But the bug necessitated a larger mental model (the author was considering too small a subsection of the graph).
I'd hope in the future we could reach a state where the program could have detected that the frame refactor would have an effect on the autofocus and all other components, instead of its being an implementation detail.
That is, the act of programming means working on an unfinished thought, something that can reflect some beliefs but compromises on being an exactly true expression of them. And so the weight of philosophical reasoning should appear at design time. What occurs after that is a discovery phase in which you learn all the ways in which your reasoning was fallacious - both bugs and feature decisions.
How often have I noticed some "odd behavior" in testing, and later wasn't able to reproduce it? Some nagging feeling that I broke something remained, but since I've deployed a new version (that fixed something else), and I couldn't reproduce the "odd behavior", I tricked myself into ignoring it.
And then I deployed to production, and shit hit the fan.
Now I try to pay more attention to those small, nagging feelings of doubt, but it takes conscious effort.
In the opening match, Kota Ibushi suffered a concussion. Some doctors came out, carried him out on a stretcher, and took him to the back. As it turns out, this was all planned. The doctors were fake, and this course of events was determined ahead of time. But coincidentally, Ibushi _actually_ suffered a real-life concussion in the match.
Wrestling always has an interesting relationship with reality.
Instead I usually tell them to do it the proper way: start from the bug, and work backwards to understand why that bug is happening. At that point the change that caused the bug becomes obvious, and most of the time we realize that we probably wouldn't have come to that conclusion by looking just at what changed.
Its disadvantage is that as systems get larger, it can get exponentially more time consuming. As programmers we sometimes learn tricks (read: assumptions) to cut down this time, but in the end, the complexity of the system beats all but the very best or most determined.
Consider tracing a bug in this manner through the entire code of something as complicated as an operating system. Most of the code you did not write yourself; you have likely never seen it before and have no idea what it does. At each new frame the debugger reaches, you have to spend time understanding what is happening before determining whether this is where the problem occurs, and there are so many frames that it can become difficult to sort through them all.
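One way to cut that search down, at least when the bug arrived with a change, is to bisect the history instead of tracing frame by frame: test the midpoint of the known-good/known-bad range and halve the search space each step, as `git bisect` does. A minimal sketch of the idea (the commit list and `is_bad` predicate here are hypothetical stand-ins for a real history and a real regression test):

```python
def first_bad(commits, is_bad):
    """Binary-search a linear history for the first commit at which
    is_bad(commit) becomes True.  Assumes the history flips from good
    to bad exactly once, the same assumption `git bisect` makes."""
    lo, hi = 0, len(commits) - 1  # lo side known good, hi side known bad
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # bug was introduced at mid or earlier
        else:
            lo = mid + 1      # bug was introduced after mid
    return commits[lo]

# Hypothetical history: commits 0..99, regression introduced at commit 57.
history = list(range(100))
culprit = first_bad(history, lambda c: c >= 57)
```

With 100 commits this takes about 7 tests instead of up to 100, which is exactly the trick that stops the cost from growing linearly with the size of the change set.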
It's hard to say based on a short Internet comment, but it sounds like the spot where your disagreement comes from is that you're understanding the word "justified" in a slightly different way from how epistemologists were using it. For example, one of the responses to Gettier's paper was to suggest that maybe the definition of "justified" should be altered to include a provision that invalidating the justification would imply that the belief is false.
So, for example, under that modified definition, the visual evidence couldn't serve as a justification of the belief that there is a cow in the field, because it allows the possibility that it isn't a cow but there still is one in the field. On the other hand, it would work for justifying a belief like, "I can see a cow from here." (Yeah, there's another cow in the field, but it's not the one you think you see.) But, still, that wasn't quite the definition that the mid-century epistemologists who made up Gettier's audience were using.
(ETA: Also, the original paper didn't involve cattle at all. Wikipedia has what looks like a good summary: https://en.wikipedia.org/wiki/Gettier_problem#Gettier's_two_...)
Edit: the main practical implication of the argument seems to be this: when you have an argument for X and then get empirical evidence for X, you cannot take that as proof that your argument for X is sound. The evidence might be suggestive, but the structure of the argument also has to be taken into account. That said, this has been a given in scientific and statistical investigations for a long time.
It investigates how we assign names to things and what those names mean and how we can reason about them.
When you see the cow (but it’s really a convincing model), then in your mind, there should be some probability assigned to a variety of outcomes. The main one would be a cow, another one might be that you’re hallucinating, and so on down the list, and somewhere the outcome of cow-like model would be there.
From that point you can go in at least two directions. One would be something like a Turing test of the fake cow: beyond a certain point it's a matter of semantics whether it's a real cow or not. Or you could say that your "justified true belief" had to apply to the total state of the field. If you believed there was both a cow model and a cow behind it, that would be justified, but the existence of the cow behind the model would not justify the incorrect belief that the model was a real cow, in the sense of not admitting uncertainty over the things you see.
And it leads to a funny thing. You saw the model of a cow, and it made you believe that there is a cow in the field and that you saw a cow. Then you could find a heap of poo, and you will strengthen your beliefs further. You might find a lot of evidence, and it will all be explained under the assumption that you saw a cow. And this evidence will strengthen your belief that you will be licked in the face when you come near the cow.
But you didn't see the cow that made this heap of poo. The real cow is pitch black, with gigantic, razor-sharp horns. The real cow has red glowing eyes, and it is going to kill you. But before you see the real cow itself, all the evidence pointing to there being a cow will also reinforce the idea of a soft-tempered black-and-white cow. The longer you manage to keep yourself oblivious to the real cow's traits, the more surprised you will be when you find the real cow.
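The "strengthening" here can be made precise with a small Bayesian sketch: evidence (the heap of poo) that is equally likely under both the gentle-cow and demon-cow hypotheses raises your confidence that there is *a* cow, but does nothing to discriminate between the two cows. The hypothesis names, priors, and likelihoods below are made-up numbers purely for illustration:

```python
# Hypothetical hypotheses with made-up priors, for illustration only.
priors = {"gentle_cow": 0.6, "demon_cow": 0.1, "no_cow": 0.3}

# P(finding a heap of poo | hypothesis): any cow produces poo equally well.
likelihood = {"gentle_cow": 0.9, "demon_cow": 0.9, "no_cow": 0.1}

def update(priors, likelihood):
    """One step of Bayes' rule: posterior is proportional to prior * likelihood."""
    unnorm = {h: priors[h] * likelihood[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

post = update(priors, likelihood)

# "There is some cow" gets more probable, but the ratio between the two
# cow hypotheses is untouched, because the evidence can't tell them apart.
ratio_before = priors["gentle_cow"] / priors["demon_cow"]
ratio_after = post["gentle_cow"] / post["demon_cow"]
```

So every heap of poo makes you more confident in the friendly cow, even though, relative to the demon cow, you have learned nothing at all.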
You're replacing the model it was criticizing with a different model and then saying that it doesn't say anything interesting about your model, so it's not interesting. It's not an argument that knowledge isn't possible, it was an argument against the traditional definition of knowledge as it was almost universally understood at the time.
However, the gist of it is correct. We often update dependencies or deploy more than we think we do. We have an "us"-focused view of our code, and keeping Gettier cases in mind helps us break out of that.
Just recently I kept thinking that I didn't know how to write a Jest test, when in fact I was using a version of Jest that didn't support a certain method. It's easy to think it's our fault, when in fact there can be deeper reasons.
One common case is when you change or delete a comment, and suddenly something breaks. It couldn't have been the comment... but it was working fine before my edit... wasn't it?
It's amazing how that just keeps on happening.
Basically, a man sees his wife walking in the town square with the hat of a friend of theirs, and this leads him to believe that she is cheating with that friend. It turns out the friend had simply lent her the hat in the market, to help her carry some eggs home, and she was going to return it. So she goes to return it, the husband follows her, and it turns out she actually is cheating on him with the friend, but the hat had nothing to do with it.
a problem has multiple potential causes, and you have every reason to believe in one of them, even though another is secretly responsible.
Russian geometry problems, as I remember them, required creative thinking, which seems to be exactly the opposite of what you are describing.
I think it follows that we never absolutely "know" something. We asymptotically approach knowledge. The scientific method is a way of approaching truth.
Matching it to the example of the papier-mâché cow doesn't really work, because the papier-mâché cow hides the real cow, whereas it is very easy to see that your code was checked in alongside other people's code.
Nevertheless, I think it would be reasonable to act upon a JTB as if it were true. For all intents and purposes, it is true to the best of my knowledge.
This does not mean I shut out new information that might make me change my JTB.
And if having a JTB is not knowledge, what is? What can we know?
We can always imagine a world where even our most firm JTB might be false.
If a JTB is not a good case for using the word "knowledge", I don't know what is.
Linguistics (not to mention, comp lit or continental philosophy) departments have an order of magnitude more to say about meaning in natural language and have had for... decades and decades.
I just don't get it.
1 1 knowing
1 0 denying
0 1 lucky hunch
0 0 sceptic, default position
1 1 mis-justification: un/incorrectly-verified
1 0 lucky denial of mis-justified
0 1 superstition
0 0 sceptic, default position
What feels like a pointer is actually a category. That is, it feels like it points to one, but it points to many. Like both examples given here: https://en.wikipedia.org/wiki/Gettier_problem .
Nothing is absolute.
1=1 is something I know is true because I know the rules of mathematics. There is no absolute truth to that.
 In America anyways.
Personally, I doubt that we're living in a simulation. But the fact that we could be demonstrates that we don't have objective knowledge. No cows in a field needed to explain it.
Philosophy might better be called “the history of flawed thinking”
I think it's actually worse than this. This scenario suggests that our minds are capable of infallible reasoning yet we may not be able to trust our observations. Really, I don't think we can even trust our own mind, and therefore JTB is undefinable.
But Newtonian mechanics are still extremely valuable, worth discovering and understanding, and anyone catching a baseball is employing them quite proactively. JTBs are the Newtonian mechanics of epistemology. You can pick them apart at a deeper level and show how they don't really exist, but they are still incredibly useful.
To call something 'true' is to know it is 'true'.
If we are bothering to debate whether knowledge is possible, JTB is unconvincing.
Eastern philosophy nailed this thousands of years ago, and we Westerners are to this day totally in the dark. We actively treat the I as a concrete object that really exists as an entity. It does not hold up to closer examination and evaporates entirely the more closely it is questioned.
That's just part of how our language works. It doesn't seem to matter whether I am a "concrete object" or some swirly pattern of becoming or indeed even an illusion! The English word "I" does not refer to an eternal soul or "atman."
If you stare long enough at an ice cream you'll have the marvelous insight that in reality there is no concrete ice cream entity, not least because it melts. Yet people don't go around saying "wake up, there are no ice creams!" Why is that?
So much philosophy is playing with words.
And while everyone loves to run in circles around the argument "but how can you know with certainty", the fact is that I am as certain that this assumption holds as I am that it provides no value at all to continually question whether reality is really real; you have to take that as an assumption to ever have any kind of value-adding discussion.
The people who insist we can't know whether anything is definitely true must agree that they can't know whether that assertion is definitely true, so they sort of kill their own argument axiomatically.
Why deny the word has meaning, just because you can't distill it down to a concise explanation? The meaning of a word can be arbitrarily complex. Knowledge can exist, even if you can't define it, because meaning is not determined by definition. Definitions are simply a mechanism for coordinating understanding, not for demonstrating its existence.
Besides, any argument against the existence of knowledge could be used against belief. Play "taboo" with the subject and don't confine yourself to using ancient terminology to describe the world and these pointless linguistic problems melt away.
Anyway your definition is wanting. A religious scientist has two kinds of clearly different beliefs: faith and knowledge. A mathematician has the same two kinds, under different names: axioms and deductions.
Saying that axioms are the same as deductions is a radical claim.
You can apply that to the real world if you then add social processes and see "knowledge" or "truth" as shared belief between a chosen set of people (or all of humanity). Then you can go down the whole rabbit hole of belief aggregation and voting theory.