On Post-Modernist Philosophy of Science (2000) (uwgb.edu)
69 points by iamjeff on Aug 7, 2016 | 70 comments



> She: I've heard that the main thing is to avoid relativism. But I'm a physicist, and that presents a real difficulty. Without relativity there'd be no possibility of making measurements and we'd each be prisoners, to all eternity, in some single point of view. In my discipline, we need the relativity of frames of reference in order even to begin work. I have a special need for relativity because I work on events close to the Big Bang. You don't need relativity, too?

I think it's safe to say that, with this paragraph, Bruno Latour failed the Ideological Turing Test: this conflation of philosophical relativism with the physical theory of relativity is not something that would ever be uttered by someone who knew what the latter is.

It gets worse from there. Please, anyone who's tempted to write a dialogue with a fictional person holding an opposing viewpoint: talk to one first! Find out what their arguments really are! And, if possible, learn their views well enough that your summary would seem accurate to one of them.


> Please, anyone who's tempted to write a dialogue with a fictional person holding an opposing viewpoint: talk to one first! Find out what their arguments really are! And, if possible, learn their views well enough that your summary would seem accurate to one of them.

Amen. It might also prevent the deliberately insulting language that Latour adopted. Clearly, he holds the scientific camp in contempt. An actual dialog might have demonstrated to him that there was more there than ignorance and mistaken ideas.


The discussion of Philip Scranton near the end was the most interesting and coherent part of the article for me; earlier, Dutch is mostly just incredulous about the words used by Latour and Derrida.

I think he makes a good point about a mutual misunderstanding:

> The history of science as presented in science texts, especially older ones, is rightly unsatisfactory to sociologists. In the interests of providing students with a heuristic framework (frequently a historical approach is the best way to explain a complex concept) and a sense of historical orientation, the accounts were streamlined to the point where they presented a highly linear view of science devoid of false starts, blind alleys, and personality clashes. The reason textbooks do it perhaps a tad better than they used to, by the way, is partly due to the insights of sociologists.

> Sociologists, on the other hand, need to realize that the way they present the history of science can seem just as distorted. However honorable their intent, their language seems at times to deny the existence of objective knowledge.


The language used in sociology does deny the existence of objective knowledge, for excellent reasons that sociologists are happy to explain. It's a way to manage a massive problem.

https://web.archive.org/web/20160529025038/http://www.cardif...

We might know objectively that matter is made of atoms, and sociologists know that too; but Dalton, Boyle and Lavoisier didn't know it. It's easy to tell a story about what they did and why they did it, and the story will end with everyone finding out that matter is made of atoms. But, in order for the beginning of that story to be an objective explanation of what people did, it has to make sense whether or not matter turns out to be made of atoms.

Causality runs forwards in time. Therefore, atoms were discovered as a result of the actions of people who didn't know about atoms, and atoms did not directly affect the actions that led to their discovery.

If atoms caused the discovery of atoms, it would have happened ten billion years ago, not two hundred.


What is actually found to be true can play an important role in understanding past attempts. Imagine one sends a group of ships on an exploration, with each ship given a partial and inaccurate map. One can use an accurate map (along with the ships' maps) to understand how the various ships met difficulties and had to change course (not finding land they expected from the inaccurate map). A sociology of the decisions made on the ships would be incomplete without an accurate map, since success or failure affects the social process on the ship, and this success depends on the accuracy of the ship's map as well as on the social forces (like two conflicting groups on the ship, some who want to get gold, some who are more inclined towards safety).

See an old article posted here on the discovery of an explanation for scurvy: https://news.ycombinator.com/item?id=1174912. Note that a correct explanation can help us to understand the partial success of past theories and also why they don't work fully. It is also important to understand the dynamics of the social groups and who is more likely to succeed.

Your link is a good post, but "does deny" is too strong. A better expression of the attitude is "bracketing/setting aside the issue itself and studying the social processes involved on both sides of the argument, and how they influence which arguments are accepted". Note that scientists and programmers often make these types of arguments as well. For instance, a scientist can say that funding committees are biased because they represent the old guard. "Worse is Better" is an argument which uses sociology-style arguments to explain the popularity of programming languages, though a sociologist probably wouldn't get into objective assessments like claiming the popular languages are worse.

BTW, I don't agree with the main post that philosophy is bunk. Also, I think that there is construction in science, but one has to be clear at what level the construction happens. There are strong internal constraints which don't allow one to arbitrarily use a different Periodic Table, for instance.


> But, in order for the beginning of that story to be an objective explanation of what people did, it has to make sense whether or not matter turns out to be made of atoms.

I'm afraid I don't follow. If the basic laws of chemistry and molecular composition were different, we wouldn't have heard of Dalton, Boyle, and Lavoisier. Instead, we'd refer to people who happened to choose fruitful lines of inquiry in that universe.


I think I understand what you and the Collins article are saying, but I don't agree that this denies the existence of objective knowledge. Rather, it is unconcerned with objective knowledge beyond its purview of understanding social pressures, even as it deals with scientists seeking objective knowledge.

Maybe I just don't properly understand the whole science wars thing. It seems so obvious as to be uncontroversial that science is a thing people do and can be studied as such, but this is probably an unhelpful strawman (steelman?) I've substituted for what's really being argued. Or maybe this all died out in the 00s.


Thanks for the link to the Collins article; I found it very interesting.

I have some questions about your argument that the atoms didn't cause their discovery, but I don't think I have time to formulate them clearly right now, so I'll just stop with thanking you for the link. :-)


Here is an exercise: draft your account of how atoms caused the discovery of atoms, then imagine how I would rewrite it as an account of how the luminiferous aether caused the Michelson-Morley experiment.

My account is obviously bullshit, because the luminiferous aether doesn't exist. So, if your story is not bullshit, there must be a specific causal link in my story that doesn't have an analog in yours. Find it.


This seems suspicious to me, like you're playing a game with words that I'm not able to precisely identify.

Isn't this conflating objective reality (which we can only theorize about) with objective knowledge about reality (which can improve over time but never be perfect)?

The theory of the luminiferous aether "caused" the Michelson-Morley experiment, the outcome of which was inconsistent with the luminiferous aether actually existing. The theory of atoms "caused" many experiments, the outcome of which was consistent with atoms actually existing.

What causes theories to exist? Conjecture and criticism motivated by logical and empirical problems. The problems are "caused" by objective reality, of which atoms are one part (at a certain level of explanation, anyway).

An account of the development of scientific knowledge should explain the development of these problems, the conjectured solutions, and criticism made to them, and why the current explanations are the best available in light of this history. At no point do we need to claim that the current explanations are the best possible, but they are still objective knowledge.


> The theory of ...

This is starting to sound like history or sociology. The next step comes when you notice that the "the theory of" keys on your typewriter are getting excessively worn, and adopt a convention that any reference to "atoms" actually means "the atomic theory of matter".

There's a convention in scientific reasoning that I'm not allowed to say "Einstein believed that spacetime is curved", although in fact he did. This is a safety measure: if we forbid people from saying that, we don't lose much, but we avoid some common ways of fooling ourselves.

Sociologists are coming from the other side. They are allowed to say "because Einstein believed that spacetime is curved", but they can't say "... because spacetime is curved." Their readers know that spacetime is curved, and it's tempting for those readers to forget that they are not the people whose actions are being explained: those people acted as if spacetime were flat. Perhaps they noticed things that were inconsistent with spacetime being flat. They never directly noticed that spacetime was curved: if it were possible to notice that directly, no one would ever have believed otherwise.


Atoms being atoms, they exhibit stable properties that lead to known behaviors and so on, which is how scientists can exist; and a few of those scientists are, to date, the only known discoverers of atoms.


> I have some questions about your argument that the atoms didn't cause their discovery

Like what is the person that discovered them made up of, if not atoms?


That can't be the cause: all of the people who didn't discover atoms were made up of atoms too.


> If atoms caused the discovery of atoms, it would have happened ten billion years ago, not two hundred.

What were those scientists made of, exactly?


The sociologists struggle with the level of confidence portrayed by most scientists. They may all claim today that X=1, but the sociologists see that yesterday they claimed that X=2. That confidence, in light of conflicting history, comes off as arrogance.

Scientists feel they need to portray absolute confidence because they know skeptics will latch onto anything to justify their views (see climate change). The problem is that disinterested parties, the sociologists, identify this conflict as a problem that scientists seem to ignore. Should the general public better understand science, scientists could possibly shed the absolute-confidence act, mooting the problem.

It's like talking to an ER doctor. The doc knows he is only 80% sure of a diagnosis, but the patient needs to see absolute confidence. They are the ones going under the knife. Should the patient better appreciate the science, the doc might be more honest. But given the average patient, the public, the act is a necessity.


Having met some of the "Science and Technology Studies" students, and having been floored by the ignorance of basic technical facts shown by some of the people they cited as authorities, I believe this postmodernist trend arose out of post-Cold War technological superiority.

With science and technology no longer needed to fight against a powerful enemy, it was easy to devalue them. However, times are changing.

A society can only ignore science and facts in favor of meaningless word salad as long as its security does not depend on it.


I'll go ahead and confess that I haven't read the whole article, but I think it will still be valuable to relay that as a former philosophy grad student (though not a philosopher of science), my first response was "who the fuck is Andrew Pickering?"

Don't think that the initial quote is the norm in philosophy.


Strangely, Pickering's Wikipedia photo instantly reminded me of Guillermo Gómez-Peña. Here is the latter drinking a bottle of pepper sauce: https://www.youtube.com/watch?v=fIfAk-guplA.


Well, this is old news. The Science Wars were a product of the post-'60s worldview, in which good-natives-versus-wicked-colonisers was more or less the overarching theme and rationality was suspect; they quickly died the death in the revised, post-9/11 master narrative of Enlightenment-versus-superstitious-darkness, which wasn't fertile ground for even modest and non-radical skepticism about The Science. A classic document of this is the "Why Has Critique Run out of Steam?" paper, in which Bruno Latour scrambles to get with the just-modified program: note the 2003/4 date. http://www.bruno-latour.fr/sites/default/files/89-CRITICAL-I... Of course we'll have to see how things develop in future, now that the post-2001 frame of reference is already fading in people's minds.


>In a century that produced Stalin, Hitler, Pol Pot, Idi Amin and Slobo Milosevic we still see philosophers and sociologists seeking the roots of evil in externals like family violence, poverty, television, even circumcision and lack of breast-feeding (and no, I am not making those up!).

Whereas the correct answer is what? Some disturbed individuals like Stalin, Hitler, Pol Pot etc are the cause? Or was it "philosophy" that perverted them -- because Hitler sure as hell didn't read much philosophy.

And what about the millions of victims of atrocities committed by the "civilized" colonial powers, not mentioned in this list? Because a) they're third world, so nobody cares, and b) they kill the nice argument about "pure evil" leaders motivated by philosophy. Those crimes were motivated by the almighty buck.

>Calling twentieth-century philosophy superficial gives it too much dignity; vacuous is the closest term.

Yeah, man, you dismissed it in a sentence. How intellectual. And coincidentally, how Anglo-Saxon: the very culture that never understood continental philosophy to begin with, and that deals mainly in scientism and crude empiricist platitudes.


> Or was it "philosophy" that perverted them -- because Hitler sure as hell didn't read much philosophy.

Unfortunately, he read enough Nietzsche to become dangerous. (Or perhaps he was dangerous anyway, and read enough Nietzsche to provide a smoke screen...)


Jacques Derrida is the Picasso of bullshit artistry.


Could be -- I know some so-called "post-modern" philosophers to be meaningful, but I know very little about Derrida.

Still, where Professor Dutch starts sounding like a crackpot is when he strawmans Latour and Derrida into a crusade against the purity of Scientific Objectivity. No deconstructionist I know of has ever picketed a chemical lab; "green activists" have done that -- and to throw Christian young-earth creationism and Velikovsky into the lap of Mr. Derrida, well, that just sounds like asking someone whether they have stopped beating their wife already.

I do wonder if the rise of this scientistic reactionaryism (not the self-satisfied belief in science, but the need to go out and write lengthy rants on subjects they know nothing about) has to do with the stagnation of fundamental physics since the 1970s. I mean -- Professor Dutch sounds like he was promised a priesthood and then denied access to God.


He makes sense within a certain discourse. Whether you think that whole discourse is bullshit, I think is what's at hand.


Also see Dawkins's review of Alan Sokal's book:

http://www.physics.nyu.edu/sokal/dawkins.html


Counterpoint: http://retractionwatch.com/

(Edit: now -- this, from the very bottom of the article, is disturbing:

"If you don't understand why science has a valid claim to objective knowledge, and why undermining the belief in objective reality is dangerous not just to science but to society at large, don't disturb those of us who do.")


Retractions aren't a counterpoint to the article so much as a result of the scientific process. Science is bound to get things wrong, and in the larger, grander scheme of scientific processes, judging science by its retractions is akin to judging someone by their childhood actions. Science develops, learns, and moves forward, and while that doesn't render it impervious to criticism, it should be impervious to those who aren't involved in it to a significant degree. Framed in that manner, I don't think the end of the article is disturbing.


I will agree that retractions are a mark of honesty on the part of some of the actors involved in "Science". What they are not is the "sign of contradiction" of Catholic faith. Honesty is not enough to claim a priesthood of the fashion he wants.

This becomes even more complicated as we leave the holy sciences of physics, chemistry and biology. We are told that the Popperian method of posing falsifiable hypotheses is the only way "forward", and we attempt to apply it to the poorly defined questions of areas such as cognitive psychology -- and then become startled, or blame p-values, when we find ourselves in the middle of a replicability crisis where any given paper may as well be false.

Because physics has such a hold on its unyielding realities, we assume a homogeneous "Science" is able to claim an "objective", extra-social -- and ironically unverifiable, as we can't arrive at right answers from wrong questions -- command of "knowledge". But that's not a true view of the real world, unyielding as it is to our space-age dreams.


You're completely right that the generalization of science as an objective process unto itself is unfounded at best and dangerous at worst, but isn't that a sign of growing pains in science? As science progresses, both in the easily quantifiable and the more abstract fields, we should expect an eventual progression towards further, better measures for fields such as cognitive psychology. It took physics hundreds of years to be verifiable to a large degree, we shouldn't hold relatively newer fields to the same standard, and scientists have to start somewhere. Toward that goal, isn't it more dangerous to stifle and stall newer fields because they're experiencing growing pains?


> isn't that a sign of growing pains in science?

Could be that; or could be that scienticians are investigating the wrong questions altogether in these massively nonreplicable fields.

Face it: science needs to learn to (1) be open to criticism, (2) become well acquainted with the critique of science (as it's expected to be well acquainted with statistical methods now), and (3) help the from-outside-science "science studies" fields grow, rather than nipping them in the bud for their still-infant understanding of the meat (rather than the processes) of science.

Much like no one should be allowed to go into research with the poor grasp of probability theory that "p-value imagineers" display, no one should go into research while imagining they're being inducted into a priesthood of objective thinking.

Maybe Derrida is a confidence trickster, but it's not him who's "stifling" science. If anything, the Neil deGrasse Tysons of the world are stifling open societal discussion of where fundamental physics is actually going (which is -- nowhere for about 40 years now) and what "scientific studies" as portrayed by daytime TV really mean.

Scientific idealism isn't doing society any favors and isn't doing science any favors. Maybe it's doing favors to cynical tenure-track professors who need to massively oversell the meaning and potential impact of their work, but that's neither here nor there: graft and rent-seeking are as old as humanity.


There is a valid argument that the postmodernists bring up that the author of this article (and science in general) seems to be ignoring. This is certainly obfuscated by the postmodernist tendency to misuse terminology all over the place, so it's best to use a specific example to illustrate it.

Recently, as I'm sure everyone here knows, the LHC found the Higgs boson. Imagine an exchange between EP, an ecstatic physicist, and SP, a skeptical postmodernist.

EP: We did it, we found the Higgs boson!

SP: Why should I care?

EP: Well, knowing that the Higgs boson exists means that we finally know that the Standard Model is correct.

SP: That's cool, I guess. So you saw the Higgs boson with your eyes?

EP: No, that's impossible.

SP: How do you know it exists?

EP: Well, there's this cool thing in the LHC called ATLAS. We get beams of protons to collide in the middle of it, and the resulting products set off little signals inside the detector. By processing the results of these signals, we can figure out what byproducts must have been involved...

SP: How can you find that out?

EP: Embarks on particle physics that goes over SP's head

SP: This is the Standard Model you've talked about?

EP: Uh, yes...

SP: So, what you're telling me is that you've just verified the Standard Model by assuming that it was true in the first place?

EP: ...

SP: Walks away in smug satisfaction

What the postmodernist gets right in the argument is that our evaluation of the validity of scientific tools to make observations is based on a model of how science works, and that, without care, you risk creating a circular argument that boils down to proving models correct because you assumed them. What they get wrong is the assumption that this circular argument is unavoidable, and that thus the entire edifice of science is built on shaky ground in the first place (ultimately being simplified down to "there is no such thing as objective reality"). Ironically, what the Sokal hoax exposed is that the postmodernists were just as guilty of the crime they accused scientists of committing: they didn't criticize that which seemed to prove them correct.

The message for scientists should be that you should always strive to make sure that the tools are working properly. If there were a serious bug in the code used in ATLAS, do you think that the scientists on the project would have been equally diligent in finding and fixing that bug whether it made a Higgs boson appear where one didn't actually occur or it hid the appearance of the Higgs boson?


> If there were a serious bug in the code used in ATLAS, do you think that the scientists on the project would have been equally diligent in finding and fixing that bug whether it made a Higgs boson appear where one didn't actually occur or it hid the appearance of the Higgs boson?

Actually, the scientists trick themselves by injecting fake signals to avoid bias on their part. So they have that covered. They're well aware of how badly one can goof up when looking for a really rare signal. The statistical standards they set are incredibly high.
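For a sense of just how high, here is a minimal sketch (mine, not anything from the actual ATLAS or CMS analysis code; it only assumes SciPy is installed) of what the conventional "five sigma" discovery threshold corresponds to as a one-sided p-value under a standard normal test statistic:

    # Translate "n sigma" into the one-sided p-value it implies
    # for a standard normal test statistic.
    from scipy.stats import norm

    for sigma in (2, 3, 5):
        p = norm.sf(sigma)  # survival function: P(Z > sigma)
        print(f"{sigma} sigma -> p ~ {p:.2e}")

    # 5 sigma comes out to p ~ 2.9e-07, i.e. roughly a 1-in-3.5-million
    # chance of a background-only fluctuation at least this large.

That is an extraordinarily demanding bar compared with the p < 0.05 convention common in the fields with replication trouble.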

The more completely scientists are able to control the experimental conditions, the better science works. Things like psychology end up probing an extremely complex system (our brains) with a lot of constraints on what they can do for obvious, ethical reasons. Add in a side of politicization of certain results and you end up with a larger amount of replication problems than you would in particle physics where we can carefully isolate effects on significantly simpler systems.


My line of questioning isn't even there. I mean, the standards of "proof" (empirical validation) of physicists are pretty good.

But ask any particle physicist and they'll tell you the Standard Model is a hack. It's almost as if it belonged to the same family as epicycles.

Moreover -- ask any particle physicist and they'll tell you the empirical work related to LHC is mostly unrelated to the last 40 years of "post-modern" theorizing in physics.

In other words: I don't think philosophy should be so much concerned with the truth of things (which, well, can't be determined by abstract speculation) as with relevance and significance. Science (as opposed to technology -- and saying technology is powered by science is like saying it's powered by testosterone) is failing pretty hard at remaining relevant. Thus the attacks on religion, linguistics, French philosophy and so on.


I don't think this circularity exists or plays any substantial role. Skeptics and some philosophers seem to have a hard time understanding how dreadfully difficult it is to come up with a theory of anything that is analytically insightful and makes even remotely correct predictions.

I would be fine if someone came up with a theory of, say, electromagnetism based on assuming the existence of the flying spaghetti monster or, to stick to the topic, based on assuming that our reality is mostly constructed by culture. The only thing I ask for is that those claims actually play a role in the theory, rather than being a mere addition, and that you can actually use the theory to build a transistor radio.


> I would be fine if someone came up with a theory of, say, electromagnetism based on assuming the existence of the flying spaghetti monster or, to stick to the topic, based on assuming that our reality is mostly constructed by culture. The only thing I ask for is that those claims actually play a role in the theory, rather than being a mere addition, and that you can actually use the theory to build a transistor radio.

I cannot upvote this more. It seems all too easy to criticize scientific methods as being baseless, or as assuming some odd construction about the universe or the context or whatever it's embedded in, but it's so easy to overlook the fact that there are only so many (read: extremely few) ways one can construct a theory with the same predictive power as that of physics with the same or fewer free variables. Thank you for putting into words something I've tried so hard to verbalize to some of my friends who please themselves by criticizing positivism/post-positivism as being woefully naive (relative to, say, constructivism), yet benefit infinitely from it.


> So, what you're telling me is that you've just verified the Standard Model by assuming that it was true in the first place?

No, we've just verified one part of the Standard Model to be true by assuming the other parts of the Standard Model, which we were already able to test, to be true. That isn't actually circular.


From Wikipedia on the Science Wars:

>> "dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives."


Maybe it's a formatting issue in my browser, but it was less than clear when it was the author talking versus when he was quoting something. I had to bail. Am I not giving the paper a fair shake?


I was about to say that this guy is about 15-20 years late to the party with the polemic, but noticed from the footer that this essay is in fact 16 years old. Maybe someone could add a (2000) to the title?

For more background on the period this essay came out of: https://en.wikipedia.org/wiki/Science_wars


The weird thing here is that if everything is a social construct, then science as a social construct is just as true as it would be as objective reality. Then you can just pick science as your favorite construct and call it a day.

But for some reason "social constructionists" often seem to be incredibly biased. Some things are "false" because they are constructs, but some things are "true" because they are constructs. And some things are just objectively horrible, while others are constructed as horrible.

Examples: rape culture is a social construct, but mental distress from rape suddenly is not. Nationalism is a "false" social construct, but Marxism is a "true" social construct.


I think you're conflating quite a few different people there. I mean, there probably are some people who simultaneously hold all those views, but most don't. For example many people doing social studies of science do find science interesting, which is partly why they spend so much time studying people who follow that methodology— they just don't think it's a methodology that is independent of the human culture that it arose from, and are particularly interested in how its social practices developed (even if the scientists themselves don't like to see what they do as "social practices"). Bruno Latour goes in this category, as something of a 'science-o-philic' social constructionist (although he's also a "soft" constructionist, because he views both humans and non-human entities, like lab apparatus and materials, as jointly doing the construction, vs. it being a purely cultural phenomenon). And generally (most) social constructionists and (most) Marxists don't see eye-to-eye, precisely because most Marxists do think there is a scientific, objective method (historical materialism) for analyzing social relations, history, economies, etc., and orthodox Marxists even tend to believe objective factors ultimately determine culture rather than the reverse.


The specific dude/dudette who invented the "nationalism is a modern social construct and pretty false" idea is, in my understanding, quite left-leaning. People who say this stuff out loud in real life (all three of them) are very left-leaning. Being completely opposed to nationalism makes sense in very few instances, and Comintern sympathies are one.


Yes, I had updated it to reflect the dates, but must have made a mistake in submission. I could rectify that in short order.

*edit: silly me, looks like I can't edit it. Perhaps someone with more karma/mod privileges could help out.


[flagged]


> It's a form of culture jamming to disrupt the cognitive processes by which we accept things as "truth" or "reality".

Can you express this a bit more clearly? "Culture jamming" is not a broadly understood term. It's not clear why the words "truth" and "reality" are quoted. Additionally, it seems odd to combine a specific term like "cognitive processes" with an ultra-broad term like "things".

If there is an argument for postmodernism being something more than word salad, your comment does not make it well.


You can read more about culture jamming here:

https://en.wikipedia.org/wiki/Culture_jamming

As an example of the problem he's addressing, look closely at your constructions like "is not a broadly understood term", "it's not clear", "it seems odd", and "your comment does not make it well".

These are all your personal opinions, and yet you've cast them in the language of objective fact, of truth. You assert without evidence that your view is identical with reality. Others reading you could take on those fact-shaped opinions as actual fact, especially if you have high social standing.

So by talking about "truth" or "reality", I believe he refers to the jointly held, socially constructed opinions that people mistake for fact.

An obvious political example of that is the way the various US state declarations of secession during the civil war assert that black people are naturally inferior, fit only to be slaves. To them it was experienced as truth.

There are also plenty of examples in the sciences; look at any major paradigm shift and you can see "truth" diverging from truth, "reality" diverging from reality. This is why Planck wrote, "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."


> A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it

It's going to be great once AI exists and we can point more concretely to something that doesn't work the way you describe (i.e., a purely evidence-driven epistemology).

While we wait for that, I'll just point out that the pace of technological progress we're seeing in society -- faster than a cycle per generation -- indicates that technical theories don't work the way you claim. Math theorems don't become "true" because the opponents die.

There would be something deeply wrong with a theory that requires ignorance of the old ideas for someone to accept it. If science actually worked that way it would be no better than the humanities.


Physics theories are of course true (or not) regardless. But their "truth" (that is, the socially agreed set of things broadly agreed upon as true) does definitely vary as people die. Relativity has always been true, but it gradually became "true" in the first half of the 20th century. Luminiferous aether theory has always been false, but it was "true" for hundreds of years before.

And science really does work that way, which is exactly Planck's point. And Kuhn's, of course. It's a human enterprise, an essentially social one.

I also think that you're overfocused on Planck's words. He was being modestly hyperbolic. People are modestly capable of relearning, but the ability declines with time and they're better at it for marginal learning than for foundational change.

Technology is not a counterexample. I've been coding for 30+ years now, and my dad started coding 50 years ago. I work very hard to keep up, but it's easier for someone new because they don't have to unlearn anything. They don't have to reconcile new data with a vast amount of old data.

A lot of technological progress happens because our field has been continuously expanding for decades, providing a flood of new people who seize upon the latest trends. And we work in a commercial context that heavily rewards innovation. Most major tech companies were founded by people who were young. There's a reason for that.


I think relativity is a red herring in this discussion because it was so famously hard to prove. That a theory which took on the order of a generation to satisfactorily prove required a generation for mass acceptance isn't in my opinion evidence for Kuhn's hypothesis, or particularly noteworthy. If you look at the different (though also foundational) example of Watson and Crick's discovery of DNA you won't see skepticism from the old guard but rather excitement. This is because the discovery, though revolutionary, was easily proved -- every cell has DNA, as can be verified by anyone once they are told how to look for it.

More broadly I think you will find that for every relativity-like-theory that was slow on the uptake (which is to say difficult to prove), there are also 10-100 promising theories which were discarded... and that the very real risk that a theory could be wrong is the principal reason for the eventually-winning-theories' slow uptake among scientists.

(Incidentally, while double-checking this critique and my DNA example, I found out it's one of the more common critiques of Kuhn's work. See http://plato.stanford.edu/entries/thomas-kuhn/#6.1)

With that out of the way, let's take a closer look at your claim that technological progress isn't a counterexample. Your point that the expansion of new people into tech should count as a new generations is well received, and I think a good and interesting one... but you also admit you yourself have changed paradigms in your lifetime. Does that not count as you putting yourself forward as a counterexample, and agreeing with respect to tech more generally?

I do agree that as I've gotten older I grumble a bit more when I have to learn a new way of thinking about something I'm already familiar with... but when it can be shown concretely that the new way is better (for example the results from deep learning) I do spend the time to relearn. This reticence seems more than enough to account for the data Kuhn is using, so I don't see why a fancier hypothesis involving me (and more broadly everyone) secretly refusing to give up on lesser ideas is needed.


DNA doesn't strike me as a good example. I don't think it was a paradigm shift. Crick and Watson didn't discover it; they just showed how this particular molecule fit well into people's expectations for what was going on.

As to this:

> you also admit you yourself have changed paradigms in your lifetime

I don't know that I have, really. Sure, some things have changed. But I'm still writing OO code that isn't that different than what I was writing in the late 1980s. I still build systems on Unix-ish OSes on collections of discrete servers. The major difference is that the servers are virtual, but that's hardly a difference.

As an industry, our continued use of the phrase "virtual server" is a sign we're still struggling to make a paradigm shift. It's like "radio with pictures" or "horseless carriage". But look at how much hate the possible alternatives, like containerization or serverless computing, get. And that pattern of hate is a common thing in technology. A large proportion of people just won't use anything new unless circumstances force them. [1]

> Does that not count as you putting yourself forward as a counterexample, and agreeing with respect to tech more generally?

No, because nobody is claiming that people never change. The notion is that they change more slowly than a completely rational actor would, especially when social status is on the line. The actual speed depends on a variety of factors. Planck exaggerated for rhetorical effect.

> so I don't see why a fancier hypothesis involving me (and more broadly everyone) secretly refusing to give up on lesser ideas is needed.

I don't think that's the right question to look at.

The pattern of people holding on to old ideas because they're comfortable or socially beneficial is pervasive. For example, consider this graph:

http://content.gallup.com/origin/gallupinc/GallupSpaces/Prod...

The change there is very close to the death rate. Or look at the way religions change.

I think the question with science is, "Is it essentially different than almost anything else people do?" And I think the answer there is no. Science is somewhat better due to having real data. But it's still a social enterprise among people embedded in status-driven primate dominance hierarchies. This leads to results like the issues surrounding the measurement of the charge of the electron:

https://en.wikipedia.org/wiki/Oil_drop_experiment#Millikan.2...

That's easily explained if you treat science as another human social activity, but hard to explain otherwise.

[1] https://en.wikipedia.org/wiki/Technology_adoption_life_cycle


> DNA doesn't strike me as a good example. I don't think it was a paradigm shift.

> I don't know that I have, really. Sure, some things have changed. But I'm still writing OO code that isn't that different than what I was writing in the late 1980s.

Hmm, okay, I think the issue we're hitting here is something like "no true paradigm shift" -- I would have thought that the introduction of, say, the world wide web in the 1990s would count as a paradigm shift with respect to technology. Perhaps it is incremental? That you have experienced no paradigm shifts working in tech since the 1980s (or at least none you have adopted) seems like a surprising claim.

> Crick and Watson didn't discover it; they just showed how this particular molecule fit well into people's expectations for what was going on.

With respect to Watson and Crick I have to admit I only have surface knowledge of the history of science here. I can say that googling for "Watson Crick discovery" does show a bunch of pages discussing a discovery, many of which seem to think of it as a paradigm shift.

> The notion is that they change more slowly than a completely rational actor would, especially when social status is on the line. The actual speed depends on a variety of factors. Planck exaggerated for rhetorical effect.

I agree with this. Inferential differences in humans have been experimentally demonstrated in the cognitive biases literature (psychology, not sociology).

> I think question with science is, "Is it essentially different than almost anything else people do?" And I think the answer there is no.

This is a place that we disagree then, although you may (perhaps rightly) come back and claim I'm taking a "no true scientist" position. To me the remarkable thing about science is how radically it differs from normal human cognition. The desire to submit ideas to falsification, and discard them in the face of data is not a very natural idea for humans, at least judging by history.

> it's still a social enterprise among people embedded in status-driven primate dominance hierarchies.

And here's the bit where you can claim I'm no-true-scientisting: I think much of good science is about subverting the status hierarchy. This is why you're linking material on electron charge (which requires a stunning amount of agreement on physics to be of interest). If science and scientists behaved like the rest of society, it seems to me we'd still be dealing with the question of whether atoms exist. For another, more concrete difference: willfully falsifying results isn't always grounds for dismissal in other professions (it mainly depends on who you falsified them to). That's not true for science.

Which is to say I agree with you broadly ("yes, science is done by scientists who live in a social hierarchy"), but I disagree that this is a particularly useful insight -- if you had tremendous amounts of experience with other professions operating in a social hierarchy, your predictions about scientists would be poor.

> [link] That's easily explained if you treat science as another human social activity, but hard to explain otherwise.

"A Bayesian is one who, vaguely expecting a horse, and catching a glimpse of a donkey, strongly believes he has seen a mule." - https://doingbayesiandataanalysis.blogspot.com/2011/07/horse...

Which is to say I don't think the principal feature of that story is that scientists didn't want to embarrass themselves or others, but rather that there was a real possibility that their experimental apparatus was faulty. The Feynman quote you linked to doesn't even attribute it to status reasons, but rather to something akin to the https://en.wikipedia.org/wiki/Streetlight_effect


As to the web, I would hesitate to call it a paradigm shift in technology. For me, a paradigm shift requires going from an existing dominant paradigm (that is, overarching conceptual framework) to a new dominant paradigm. E.g., the Copernican Revolution.

The web was a paradigm shift for, say, newspaper publishers. It totally upturned their world. But from a technology perspective, it was pretty straightforward. It created new possibilities, but I'd call it a new frontier. The day before, we were writing daemons to output text over a network socket; we did the same thing the day after. Likewise, we were showing people text and images and getting them to enter data in forms. Indeed, I think the rapid spread of the web was only possible because it wasn't a paradigm shift.

I'd say a better example of a tech paradigm shift would be from mainframes to personal computers. Or from physical servers to whatever thing comes after virtual servers. Or from isolated computers to networked computers.

As to Crick and Watson, they did discover something, but I don't think some people on the Internet saying it's a paradigm shift means that it meets Kuhn's criteria for a paradigm shift.

I agree that the scientific method is important and valuable, but disagree that the social enterprise of science is therefore essentially different. Humans have always been social primates with modest empirical tendencies. The (social) mechanisms of science turn the knobs a bit away from "social primate" and toward "empirical", but it's a difference of degree, not of kind. It is still a social enterprise. We're still status-oriented primates.

As an example, look at the story of Barry Marshall. Sure, he eventually got the Nobel. But he endured enormous resistance because his opinions did not accord with those of the people with power in his field. There is no way to estimate the number of people who we've never heard of because they were not as stubborn as Marshall, but I'm sure it's not zero.

Finally, I disagree with your interpretation of Feynman's story. Humans are, like all their cousin species, intensely status-focused. They published wrong numbers because they didn't want to be wrong in public. "Wrong" here being defined not by actual factual correctness, but by social conformance. They were looking under a streetlight, but they all picked the same streetlight through a social process, not one imposed by the scientific method.

We totally agree on the scientific ideal. But I think it's vital to acknowledge the divergence between the ideal and actual practice, and to study the causes of that divergence.


> The web was a paradigm shift for, say, newspaper publishers. It totally upturned their world. But from a technology perspective, it was pretty straightforward. [...] The day before, we were writing daemons to output text over a network socket; we did the same thing the day after.

Again, I think we have a case of "no true paradigm shift" on our hands. My inner you says the same thing about the newspaper business actually: "the day before we were writing news stories and selling ads; we did the same thing the day after." Of course, in the technology case the kinds of software we were writing drastically shifted, which algorithms were important shifted, etc... but that's also analogously true for news... the beats and topics changed in value.

I'll go so far as to call this the central problem of sociology; the terms are vague enough that theories built on them can be bent to explain anything. Medicine has this problem too, in that tons of diseases are essentially catch-alls for unexplained pain, discomfort, or inflammation. Chemistry, Physics, or Math not so much. Psychology sometimes. I do think sociology has some value, but not more value than, say, science fiction or poetry (which also allow readers to think personally interesting thoughts that work with personal/non-objective categories).

^ for the record that's not an original idea. The https://en.wikipedia.org/wiki/Logical_positivism folk beat me to it.

> Humans have always been social primates with modest empirical tendencies. The (social) mechanisms of science turn the knobs a bit away from "social primate" and toward "empirical", but it's a difference of degree, not of kind. It is still a social enterprise. We're still status-oriented primates.

To be concrete, consider the "medicine" offered by https://en.wikipedia.org/wiki/Traditional_Chinese_medicine and then compare with actual medicine. If you'd like to call that a difference in degree that's your right I suppose... but from an external results perspective they appear to me a difference in kind. Maybe that's splitting hairs? Is a permanent marker different from a whiteboard marker by degree or kind? I dunno, I guess maybe I see the glass as 90% full rather than 10% empty here.

I do know that a dataset on the history of TCM would perform extremely poorly at predicting the progress that Chemistry has enjoyed... however a dataset on the history of Physics would do rather well. That seems like a difference of kind to me. Absolutely a difference of category from an ML clustering perspective.

> I disagree on your interpretation of Feynman's story. They published wrong numbers because they didn't want to be wrong in public.

As I'm sure you know from the link, Feynman disagrees with you here:

> When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard

Fortunately for us, your claim can actually be tested. We just need to see if scientists care about being wrong when the error won't be public. They do. As evidence I #include Newton's massive catalog of unpublished works, and Feynman's book title "The Pleasure of Finding Things Out". Much of science is an intensely single-player puzzle game. You play because it's fun, not to show off your high score.

I do agree that no one enjoys being wrong in public -- we spellcheck our work for others, not for ourselves, etc. I just think the effect size is waaaaay smaller here than in most areas of human endeavor (somewhat higher, but on the order of the status effects that can be seen in crossword-puzzle-solving behavior). I think Feynman also believes this, as evidenced by the above quote, and his autobiography more generally.

> We totally agree on the scientific ideal. But I think it's vital to acknowledge the divergence between the ideal and actual practice, and to study the causes of that divergence.

I do have a category of "science pretenders" that I use to explain essentially non-scientists that someone has accidentally given tenure to. If you just ignore these people you can still do just fine as far as predicting the future of science goes in my experience and opinion. They don't publish anything interesting.

This is what I meant when I earlier freely admitted to being guilty of playing "no true scientist" -- if some university, say, adds a fashion sciences department, that won't make the people employed there scientists, or fashion sciences a science. It will be individuals applying scientific thinking (i.e., puzzle-solving-type thought). Because it all depends on small groups of puzzle solvers, that's where the modeling data is. Contrast with non-sciences, where the personal tastes of the big players need to be modeled, or, worse yet, other people's guesses as to what the big player's good taste is.

As you move to softer sciences I suppose I can see an argument for needing to study primate behavior more; I guess I just don't see it for the hard sciences, at least if you've selected a good group to work with. Hmm... maybe not needing large grants to do your research is also important there. So essentially I think it's important "for non-scientific purposes related to science".


Ok. I don't think it's my job to argue you into understanding Kuhnian paradigms as distinct from other uses of the word. I also believe you continue to misunderstand what's going on with both the oil-drop experiment and Feynman's take on it, but I don't see more words from me helping there either, so I'm happy to agree to disagree.


I will say that sociologists are better than normal at making model disagreements sound like the other person's fault. Perhaps my understanding of a Kuhnian paradigm shift is wrong? It doesn't feel wrong, and I read a fair amount on it in my early college days.

I never got the chance to talk with Kuhn himself, but I'd imagine if we could bring him back and include him here we'd have a third notion of what he was talking about -- and all the models would be consistent with the text he wrote -- not a bad thing in my view, rather like differing opinions on a poem's meaning.

Take care and thanks for chatting.


Thank you for the link. I suspect we agree more than we disagree. It's good to be reminded that we can only falsify theories, not prove them. It's also good to check assumptions from time to time - your example of slavery is very apt.

I am not, however, going to modify the way I speak and write to emphasize that I am speaking my own opinions, with my own assumptions. That should be obvious.


You should certainly speak however you please. But there are three big reasons I often explicitly mention when something is my opinion. One, it signals to others that I am not one of those people who confuses my opinion with objective fact. Two, it creates conversational room for others to express differing opinions without the discomfort of interpersonal conflict. And three, it helps me remember that my map is not the territory, that my opinions really are just imperfect opinions.


Layperson here, I'll do my best to parse:

> People assume that postmodernism is an attempt by the left to weaponize rhetoric for their own ends.

Some people say postmodernism is a way of giving intellectual superiority to left wing political rhetoric

> But the right has already weaponized rhetoric.

They started it (presumably with religion)

> Postmodernism is more like "defense against the dark arts".

Postmodernism is the bullshit we use against their bullshit.

> Yes, much of the work of e.g. Derrida sounds like long-form trolling, because it is.

U mad bro?

> It's a form of culture jamming to disrupt the cognitive processes by which we accept things as

Derrida is psychological warfare.

> "truth" or "reality"

Postmodernists make for shitty programmers


> Postmodernists make for shitty programmers

Honestly? I get that people are skeptical and dismissive of postmodernism, but this is just petty tribalism with no basis in fact. Not every programmer is a staunch right-libertarian, even though that's the stereotype. Maybe it would surprise you to meet programmers well versed in the humanities who respect or even identify with postmodernism. Human ideologies are absurd; postmodernism is just the investigation of how absurd they are.

Here's how to apply postmodernism to science in order to improve science: it helps you identify and analyze the power structures and processes that regulate and sustain the actual act of doing science. If there are systemic biases or failures within those structures they will impact the quality of the science you're doing. There is "objective" knowledge in science, but there's also absolute garbage, and there's no ideological argument to tell them apart.

Ideology influences the way we observe and interpret evidence. It even shapes the kind of experiments we run, and the kind of hypotheses we propose. It won't make you misinterpret a thermometer, but science is scarcely ever as simple as aiming a sensor at a piece of spacetime and reading the output.


Nowadays "postmodernism" is mostly used as a label to pin on someone before deploying a critique. It is an offensive rhetorical tool.

I understand your need to defend left-leaning coders and to understand the ideological underpinnings of modern science, but I suggest you update your conceptual framework, just as you should be wary of using an unsupported browser.


Update it to what, I wonder?


"Objectivity", "rationalism", "freedom", or some other totally non-ideological word, I'm sure.


My point is that I know of no contemporary scholar claiming to be a postmodernist, and I am not aware of recent efforts to build on this theory. I might be wrong, as it is impossible to read everything.

What I have witnessed is that people use postmodernism as a strawman, usually in a reactionary stance. This article is no exception, even if it brings some interesting points debated in other comments.

As a serious answer to a somewhat rhetorical question: Some have suggested realism and I find their endeavor interesting. This book is one good example.

http://www.hup.harvard.edu/catalog.php?isbn=9780674004122&co...


That's probably because postmodernism isn't a theory. It's a very broad term that encompasses many, many responses to Modernism and the politics of that era.

Critical theory is alive and well, especially in law. I do love me some good critical theory, since its effective application can dismantle most ideologies that refuse to acknowledge their basis in violence. I don't think anyone is a "critical theorist", but it's certainly a tool many scholars use.

Postmodernism had a lot of fruits. Some of them are still perfectly juicy and nutritious and some turned out to be bitter and indigestible. Certainly, I would prefer to discuss the weirdest pomo over the vulgar scientism that's popular around here :)


I don't have much opinion on the rest of your translation, but I think this gets it wrong:

> Postmodernists make for shitty programmers

To me one of the big differences with postmodernism is a deep skepticism for grand overarching theories and perfect understandings. I think that's an immensely useful trait for a programmer.

The more I code, the more I treat any digital representation as a deeply distorted image of the reality I'm dealing with. The best I can do is have the distortions match my user's purpose. As Box wrote, "All models are wrong; some models are useful." The modernist continues to seek the perfect model. The postmodernist treats perfection as a fool's errand and ships something that works well enough.

I have seen this problem over and over in my career; a lot of architecture astronautics strikes me as rooted in modernism. But perhaps a good example that everybody can look at is early concepts of AI vs. today's machine learning. When I compare Cyc [1], which has spent 30 years trying to build the seeds of perfect knowledge, with today's deep learning, that strikes me as a postmodern shift.

How does a convolutional neural network recognize a dog? We don't exactly know, not in a way that makes much sense to us. [2] But we don't care. The convnet has its own very bounded, limited version of "truth"; as long as we get a 90% match, we ship it and go out for beers.

[1] https://en.wikipedia.org/wiki/Cyc

[2] https://blog.keras.io/how-convolutional-neural-networks-see-...
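To make the "90% match, ship it" attitude above concrete, here's a minimal sketch (mine, not anything from the Keras post linked in [2]; it assumes TensorFlow/Keras is installed and uses "dog.jpg" as a hypothetical image path):

    # Ask an off-the-shelf pretrained classifier for its best guess and a score;
    # if the score clears a threshold, that bounded notion of "truth" is good enough.
    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")   # nobody hand-crafted these weights
    img = image.load_img("dog.jpg", target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    _, name, score = decode_predictions(model.predict(x), top=1)[0][0]
    print(name, score, "ship it" if score >= 0.9 else "keep training")

Nothing in that program encodes what a dog is; the "knowledge" is a pile of learned weights we can't really read, and we accept it because the score is high enough for the purpose at hand.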


>Postmodernists make for shitty programmers

But yes, postmodern word salad is pretty damn meaningless most of the time. Unless there is an oscillation between an objectively proven framework and the relativist nonsense wonders of the subjective, getting lost in tulpa-creation is fairly easy. There need to be checks and bounds, or at least something to contrast against. Metamodernity evolved out of the past years of postmodern headwankery and the absolute rigidity of modernity beforehand. Balance is key; see BlackBox, Ronkko, Turner, Shia LaBeouf, etc. Metamodernism [1] [2]

The compiler is the arbiter of truth, error messages the concrete we come to upon smelling of rye.

[1] https://www.youtube.com/watch?v=2EVb8bwuM54

[2] https://theblackbox.ca/api


That's a postmodern way to say 'wrong', right?


>Postmodernism is more like "defense against the dark arts". Yes, much of the work of e.g. Derrida sounds like long-form trolling, because it is. It's a form of culture jamming to disrupt the cognitive processes by which we accept things as "truth" or "reality".

You mean: it's a cheap and riskless replacement for actually struggling against capitalism via a materialist analysis of the conditions of the working class.


Indeed, because Derrida, after all, moves culture towards liminal sublimity. By discoursing on the always-becoming nature of the sign, he aided decolonisation of the scientific palimpsest.


An article with such a title that doesn't even mention transhumanism? I'm sorry, I don't have time for this.



