One of the key points of contention between these two movements was the problem of "innate knowledge". Rationalists believed that we were born with certain kinds of knowledge (innatism). Leibniz believed that we were born with basic mathematical knowledge and reasoning. Descartes thought he had a great many innate ideas -- perhaps most famously, his proof of the existence of God relied on the fact that he had an innate idea of God.
It was not until Locke came along that anyone presented a bottom-up theory attempting to account for the acquisition of basic forms of knowledge (e.g. difference and similarity, truth and falsehood) while maintaining an initial "tabula rasa" state.
I wouldn't lump the contemporary rationalist movement in with either of these previous movements -- it is probably best considered an extension of logical positivism, which traces its lineage to figures like the early Wittgenstein.
However, positivism brushed up against many philosophical problems: semantics, the analytic-synthetic dichotomy, etc. Things got worse when Wittgenstein eventually wrote "Philosophical Investigations", which effectively contradicted everything he had written in the "Tractatus Logico-Philosophicus" -- the foundational text of logical positivism. The two periods of his thought were so different that people generally distinguish the "Tractatus" Wittgenstein from the "post-Tractatus" Wittgenstein.
One of the key scientific critics of logical positivism, Karl Popper -- best known for formalizing the idea of falsifiability -- eschewed induction altogether. I highly recommend his "The Logic of Scientific Discovery" (which is not to say that I agree with Popper, but he is an important figure in the philosophy of science).
Thomas Kuhn's "The Structure of Scientific Revolutions" calls into question the very idea of there being a "scientific truth" that is independent of historical assumptions. Contemporary heirs of this kind of historically focused thought, like Ian Hacking, try to make room for the idea of "scientific fact" in some contexts (particle physics) but question it in others (psychology). I highly recommend Hacking's "Making Up People" -- a brilliant essay which is still friendly to people who haven't read a great deal of Foucault (another important figure in recent philosophy of science).
Second to last in my brief and certainly not comprehensive list is Hilary Putnam. It's hard to precisely pin down Putnam's thought -- as a philosopher he is remarkably open to criticism -- he has even written papers in response to himself. Putnam has mostly kept himself consistent with scientific realism -- which basically states that scientific theories are usually more-or-less true. Early on, he was a metaphysical realist who later became sharply opposed to the school. Frankly, I don't know where he stands today, but read his essay "Brains in a Vat" if you're at all interested in skepticism.
Finally, this list of alternatives to positivism would be incomplete without Richard Rorty. His landmark text, "Philosophy and the Mirror of Nature", rejects the possibility of an epistemology altogether. I can't recommend Rorty's writings enough -- I believe that he and Foucault stand as the two most important philosophers of the last 50 years.
I hope that I've given some more background on the rationalist/empiricist debate and contemporary alternatives to proper empiricist and positivist thought. I certainly cannot give you anything comprehensive in the space of a HN comment -- my only goal has been to hint towards various schools of thought on this matter. Pierre Hadot wrote that philosophy is a way of life more so than a domain of thought -- and I tend to believe that in life, the journey is more important than the destination. In that respect, it is impossible for me to give you a firm answer to the question you asked of the OP: "if not empiricism, then what?" -- but I hope that my comments help you find where you stand.
OK, let's do this.
We have a scientific theory.
From it, we derive some engineering discipline, which uses the theory to, essentially, make predictions about what will happen if we do this to that, with the property that, if the predictions hold true, we'll have something useful.
The people following the engineering discipline create things.
Those things work.
Does that not, then, validate the scientific theory?
And if that scientific theory is validated, does that not knock Kuhn on his ass?
Because the forces that the engineers' artifact is subject to don't give a rat's ass what our current culture says. They were the same billions of years ago and will be the same billions of years hence, the existence of our species, or of intelligent life at all, notwithstanding.
OK, some fields of science don't make sense without humans to study. Right. But others will still be just as true if we're wiped out and replaced by sapient Corvidae, or not replaced at all.
Kuhn's argument is that science occasionally undergoes what he calls "paradigm shifts" -- a low-level shift in assumptions about a certain realm of scientific thought that fundamentally changes the way we approach a particular field. One example Kuhn gives is the Copernican Revolution. Copernicus, as we all know, proposed the heliocentric solar model. Before Copernicus, most people used Ptolemy's epicycles to model the movement of planetary bodies. Initially, it worked, but the cracks started to show as observations accumulated. A major shift in our assumptions about the organization and modeling of planetary bodies had to occur before scientific progress could move forward.
In this sense, scientific knowledge progresses in giant shifts, rather than linearly or incrementally. Consider the theory of atomism and how it was disrupted by the Rutherford gold foil experiment, or the double-slit experiment and what it did for physics. A dominant paradigm must always make way for a new paradigm in order for scientific progress to occur.
The upshot of all of this, according to Kuhn, is that the criteria for scientific truth are always caught up in certain historical assumptions and that we have to take these assumptions into account when assessing the veracity of a given theory. He doesn't say that there aren't facts about the universe, but rather, that the scientific approach to understanding the universe is caught up in a paradigmatic frame which makes it impossible to derive a simple, objective algorithm/process for scientific discovery.
Does that make sense?
If your historical assumptions are true in a useful way, your science will progress to the point where you have more-useful engineering; if they're wrong in an important way, your science will stall out, or give you the wrong answers. If they're neither, and they don't affect the ultimate utility of your predictions one way or the other, it all becomes a bit academic: Is it even useful to say your theories are wrong if they keep making good predictions and allow scientific progress to be made? Note well that atomism (pre-quantum) and geocentrism eventually stopped making good predictions, stopped being a gateway to more complete theories, or both. (For example, geocentrism is probably impossible to integrate with universal gravitation.)
"Truth" is something mathematics has access to, not physics. Truth-With-A-Capital-T is Absolute, Perfect, Incorruptible, and utterly inconsistent with reality as it is outside of the symbol-games we play in our minds, because Platonism is downright insane.
Therefore, "scientific truth" is contingent, sure, but it's contingent on more than mere fashions. It's contingent on experimentation and experiments don't care if your histories are contingent one way, the other, or the other way entirely. Nobody's histories were contingent enough to imagine the Earth repelled small rocks.
So Kuhn agrees that there is a universe and that there are, at least potentially, facts about that universe humans are capable of discovering. That puts him a few up on some philosophers. However, I don't agree that our criteria for scientific truth are fully entwined with our historical accidents, as long as we rely on science to predict what the non-human world is going to do.
And to use current rationalist terminology: any given scientific model has an associated probability estimate for being true, which is very close to but not equal to 1; any work built on top of that model will depend on the truth of the model; and invalidating a model in favor of a new one requires re-evaluating any work based on that model. The "giant shifts" you're referring to occur when a model lower down the stack, with a pile of things built on it, gets invalidated or replaced by a better model.
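To put toy numbers on that (every figure here is invented for illustration), a minimal sketch of the odds-form Bayesian update such a model might undergo when experiments start to go against it:

    # Hypothetical numbers throughout; only the shape of the update matters.
    def update(prior, likelihood_ratio):
        """Posterior after evidence with likelihood ratio
        P(evidence | model true) / P(evidence | model false)."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    p_model = 0.999                      # close to, but not equal to, 1
    p_model = update(p_model, 1 / 100)   # one badly mispredicted result: ~0.91
    p_model = update(p_model, 1 / 100)   # a second one: ~0.09 -- re-evaluate the stack
    print(p_model)

Two surprising results drop the model from near-certainty to near-rejection, which is roughly what a "giant shift" looks like in these terms.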
On a day-to-day basis, you don't typically re-evaluate the validity of Newtonian or relativistic physics. Most of us regularly use Newtonian models despite knowing that they don't exactly match how the universe works. And we know that relativistic models don't exactly match how the universe works either (notably on a very small scale), though we don't have better models yet that work on both small and large scales.
That doesn't invalidate the scientific method.
First, the new candidate must seem to resolve some outstanding and generally recognized problem that can be met in no other way. Second, the new paradigm must promise to preserve a relatively large part of the concrete problem-solving activity that has accrued to science through its predecessors.
It wasn't so long ago that Einstein declared that God doesn't play dice with the universe. Kuhn doesn't deny the existence of scientific facts or the utility of the scientific method -- he only hopes to illustrate that the notion of scientific truth is contingent on certain assumptions and that these assumptions often get in the way of future progress.
It's also important to remember that Kuhn wrote "The Structure of Scientific Revolutions" back in 1962. At that time, he was largely responding to the logical positivists. While I think a lot of the contemporary rationalist movement is caught up in old logical positivist modes of thought, your own willingness to invalidate models based on evidence wasn't fully developed before Kuhn and Popper brought such thinking into the mainstream.
I don't see how he gets from A to B. How does the fact that we can obviously only build models based on the past -- since the future is not accessible to us -- prove that it is impossible to refine models to asymptotically approach a hypothetical fundamental truth?
We may not have a formalized algorithm. But whatever is running on human brains has worked so far.
If I boil this down it sounds to me like he's claiming that a system that takes its past states as one of its inputs (observation of the universe being another) is incapable of refining scientific theories? Which, given the fact that's exactly how scientific discovery has been done so far, seems false.
So what am I missing here?
> I don't see how he gets from A to B
> If I boil this down it sounds to me like he's claiming that a system that takes its past states as one of its inputs (observation of the universe being another) is incapable of refining scientific theories? Which, given the fact that's exactly how scientific discovery has been done so far, seems false.
Now, we didn't have to abandon Newtonian mechanics entirely, but quantum mechanics has replaced Newtonian mechanics in most fields dealing with particles and small numbers of atoms.
Kuhn's argument is that science has advanced, but not through a simple process of "refining". According to Kuhn, science isn't generally a linear or incremental process -- it is a cyclical one in which an existing model ceases being useful and we have to find a better one.
I hope that helps! If you're interested, I highly recommend reading the text -- Kuhn was an excellent writer.
First off, the notion that engineering "validates" science. What do you mean by validate? Do you mean successful engineering informed by a set of scientific principles, somehow, in a scientifically rigorous way, renders those principles true? As in, end of story true?
Because most of mechanical engineering is done on the assumption that force equals mass times acceleration. The industrial revolution yielded countless engineering marvels on the back of Newton--cars, trains, breathtaking buildings and bridges. But the theory isn't "true," because Albert Einstein did some thinking and realized that the physics changes as you go really fast--something "engineering" missed, despite the fact that the stuff "worked."
But somehow it feels wrong to say that Newton was wrong, right? Because in his world, with the kind of thinking and set of scientific instruments available to him, and the battle-hardened inverse-square phenomena that could be painstakingly measured and applied in engineering, it was infallible. It was true. But only in historical context.
1) Actually not quite so. The strange orbit of Mercury was out of line with Newton's equations, so much so that the planet Vulcan, conveniently placed where Earth-bound observers couldn't see it, was invented to explain the discrepancy. And so the theory was saved, until general relativity explained the orbit away, too. So much for "truth."
I don't deny that it's naive.
> First off, the notion that engineering "validates" science. What do you mean by validate? Do you mean successful engineering informed by a set of scientific principles, somehow, in a scientifically rigorous way, renders those principles true? As in, end of story true?
Nothing is end-of-story true except in mathematics, where we have access to absolute truth by virtue of first accepting an axiom system as absolutely true within a context, and then accepting some logical rules as capable of turning one absolute truth into another absolute truth within that same context.
Mathematics is absolute, but it's only valid within the abstract context of that branch of mathematics.
Physics, for example, is only conditionally true -- contingent on our not finding evidence that refutes a given theory -- but it is applicable to the real world.
So engineering provides evidence that the theories we have can predict the behavior of the Universe at least in the context where they're being applied. A theory is only validated in the world in which it is tested. Granted. However, to the extent it is tested and validated, that validation should be accepted as worthwhile, as opposed to being written off as something culturally contingent.
> Because most of mechanical engineering is done on the assumption that force equals mass times acceleration. The industrial revolution yielded countless engineering marvels on the back of Newton--cars, trains, breathtaking buildings and bridges. But the theory isn't "true," because Albert Einstein did some thinking and realized that all of the physics change as you go really fast--something "engineering" missed, despite the fact that the stuff "worked."
Right, and Einstein's predictions about how the acceleration of a massive particle to near light speed would affect its measurement of time were validated not by engineering but by experimentation. It's also true that bridge engineering validates Newton as much as it does Einstein and Dirac, for example, because it operates in a world where all three theories are "valid" in the sense of "if you use them to help make your bridge, they will not cause it to fall down". For that matter, it validates whatever ideas the ancient Roman bridge-builders had, at least if the bridge is of a style the Romans made. I grant all of that.
Philosophically, then, we're back to Popper, in that negative results push science forwards, whereas positive results only make us more sure that the ground we're standing on is solid. We shouldn't ignore positive results, though, because the bridge will still stand even after the next paradigm shift; we should further accept all theories as provisionally correct. That much seems fairly mainstream, philosophically speaking.
However, we are moving forwards. We are able to explain more observations than we have been able to in the past. We are not just moving in circles, with each paradigm shift undoing all of our work and sending us back to square one. We learn to make better and better bridges, to bring this back to engineering.
> But somehow it feels wrong to say that Newton was wrong, right? Because in his world, with the kind of thinking and set of scientific instruments available to him, and the battled hardened inverse square phenomena  that could be painstakingly measured and applied in engineering, it was infallible. It was true. But only in historical context.
Newton's laws were always provisional. We now know them to be incomplete, but still useful for human-scale construction, on Earth or in space or on other bodies entirely. They've been subsumed into more modern theories as a special case; they're the equations you observe when you set the parameters to be similar to what humans experience first-hand. And, as you said, they couldn't explain Mercury, which modern theories can, so they were incomplete even before we had GPS satellites to falsify their predictions. (I mean, they were observably incomplete. Our observation doesn't dictate what reality is; any solipsists can kindly imagine that I don't exist and refrain from communicating with me.)
So engineering does validate theories, but validation isn't enough to winnow theories until you come up with some test some of them fail. That's just Popperian philosophy, though, isn't it? That's just the philosophy of science that all the cool kids are so done with right now, right? My point is that we shouldn't imagine that the validation is worthless, or imagine that it can be undone, because any new theory will have to explain precisely the same behavior as the old one, paradigm shift or no.
> Mathematics is absolute, but it's only valid within the abstract context of that branch of mathematics.
Around the turn of the 20th century, paradoxes such as Russell's were discovered in naive set theory. In response, mathematicians developed formal axiom systems (nowadays most people use ZFC, although Von Neumann–Bernays–Gödel and other variations are sometimes used) intended to provide a mathematical foundation free of such paradoxes/contradictions.
However, as Gödel's incompleteness theorems demonstrated, no set of foundational axioms strong enough to encode arithmetic can be both consistent (free of contradictions) and complete (such that all mathematical truths can be deduced from it).
So, while it is true that mathematical proofs are formally valid deductions from a set of axioms, it is worth recognizing that the relationship between mathematics and truth is somewhat more complex than it seems. As it stands, any such axiomatic system leaves infinitely many mathematical statements that can be neither proved nor disproved within it. Some philosophers have even sought to identify 'quasi-empiricism' in mathematical thought.
And if you find that interesting, you'll love James Conant's paper on Logically Alien Thought.
We (and by we I mean Popperians) want to believe that science is a series of universally positive logical assertions that can be cut down by a single negative observation, as logic would dictate. But we don't always know what the criteria are for successful negative observations. The criteria are less rigid and well defined than we would be willing to admit, and they vary from community to community. Robert Millikan won the Nobel prize for measuring the charge of the electron with his brilliant oil drop experiment. Only problem? His measurement was wrong. As folks tried to repeat it, they deviated more and more from his original measurement, until many repetitions and many publications later they landed on the correct value. If you were to plot the "true" measurement of the charge of the electron against time, you would see something deviating very slowly from an arbitrary incorrect value to the correct one. You have to ask: how on earth is this possible? Bias, authority, imprecision over truth criteria -- all at play. And I think it's this sociological fuzziness, in play in many thousands of small ways, that leads us to at least question the assumptions on which truth is founded.
Bayesian reasoning helps here, I think, because people are wrong, and different people are wrong with different probabilities. For example, overturning mass-energy conservation because someone said they saw a professor turn into a small cat or a strange spacecraft appear and disappear is not reasonable: The probability of one person being wrong or insane is a lot higher than the probability of something really well-verified being completely incorrect.
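In odds form, with made-up numbers, the asymmetry is stark:

    # Invented numbers: how much should one eyewitness report shift a
    # well-verified conservation law?  Odds form of Bayes' rule.
    p_law_wrong = 1e-12        # prior that mass-energy conservation fails
    p_report_if_wrong = 0.5    # witness reports it, given the law really failed
    p_report_if_fine = 1e-4    # witness is mistaken, lying, or hallucinating

    prior_odds = p_law_wrong / (1 - p_law_wrong)
    posterior_odds = prior_odds * (p_report_if_wrong / p_report_if_fine)
    print(posterior_odds / (1 + posterior_odds))  # ~5e-9: still overwhelmingly unlikely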
Is it political at times? Yes. Can it be improved? Sure. But it is flawed, not completely broken, and I think Kuhn makes too much of the flawedness, which encourages people to imagine that it's completely broken and that therefore the next paradigm shift will validate homeopathy.
Based on how science actually works, this notion is fanciful. No such table exists. If you had the luxury of asking the top physicists, say, to each create such a table for you, the results would very likely all look different.
Also, your comment regarding homeopathy is something of a strawman. Paradigms are incommensurable. If we do incur a paradigm shift in our lifetimes, it's likely that our current ways of speaking about science will be unable to capture it.
Newton's physics are useful in engineering; in fact they have been good enough for almost every engineering project mankind has ever undertaken, including complex stuff like landing a robot on a comet. But are they true?
Well, we know they are not complete. There is evidence that Newton can't explain; that's how we ended up with our modern theories of quantum mechanics and general relativity. But are those true?
Because we know we cannot reconcile them with one another. And then there is dark matter and dark energy, which are as yet unexplained, and might comprise about 95% of the known mass/energy of the universe.
More prosaically, think of the last time you saw a bird fly. What's the "truth" there? Your observation, or the theories of quantum mechanics and gravity, which we believe govern the matter and energy of the bird?
In programming we have a concept of "leaky abstractions"--build a layer of abstraction on top of an underlying technology, and chances are that at some point someone will have to descend to the underlying technology to fix a bug or find an optimization.
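The stock example: floating-point numbers are an abstraction over the reals, and the abstraction leaks the moment you do ordinary arithmetic:

    # Floats abstract over real numbers; the abstraction leaks immediately.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False -- eventually someone has to learn IEEE 754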
What if our entire history of scientific observation is just a collection of leaky abstractions? We have no way of telling in advance when we've reached the bottom. So the theory we think might be "true" today, might turn out to be a leaky abstraction tomorrow.
Edit to add: Cultural and historical assumptions raise their heads when we find the holes in the abstractions and are trying to explain them. In the absence of reliable theories, or in the face of seemingly incompatible observations, we just try to sort of apply what we already know.
Kuhn's idea is not that scientific knowledge might be wrong. It's that human beings might be wrong when they think they know what truth, or reality, is.
Einstein said of quantum mechanics, "God does not play dice," and he spent years trying to prove that the universe is as deterministic as he believed it to be. That's what the voice of historical assumptions sounds like.
We actually use logic along with induction to get a lot of theories. The fact that it works is proved by the scientific method, not the other way around.
The scientific revolutions that Kuhn was talking about often don't make the old model entirely invalid, but rather an approximation that works reasonably well in a limited domain. So the old model is not entirely true, but it's sorta true. If you're the sort of person who wants to say that theories are either true or false, it's an edge case that isn't easily handled.
I've certainly never heard of the two names you dropped.
Here's a key passage: "Observational vocabulary is not a vocabulary one could use though one used no other. Non-inferential reports of the results of observation do not form an autonomous stratum of language. In particular, when we look at what one must do to count as making a non-inferential report, we see that that is not a practice one could engage in except in the context of inferential practices of using those observations as premises from which to draw inferential conclusions, as reasons for making judgments and undertaking commitments that are not themselves observations. The contribution to this argument of Sellars’s inferential functionalism about semantics lies in underwriting the claim that for any judgment, claim, or belief to be cognitively, conceptually, or epistemically significant, for it to be a potential bit of knowledge or evidence, to be a sapient state or status, it must be able to play a distinctive role in reasoning: it must be able to serve as a reason for further judgments, claims, or beliefs, hence as a premise from which they can be inferred. That role in reasoning, in particular, what those judgments, claims, or beliefs can serve as reasons or evidence for, is an essential, and not just an accidental component of their having the semantic content that they do."
EDIT2: This might be a more helpful explanation. Elsewhere, Brandom elaborates on this view by redefining beliefs as commitments. A commitment entails some other commitments, and is mutually incompatible with others. For example, if I believe that I'm eating an apple, this entails the belief that I'm eating a fruit. It's mutually incompatible with the belief that I'm eating a squid. It also entails that I believe things I might not even know about the apple, such as the chemical formulas for the various sugars I'm eating. The point is, if I play the game of reasoning correctly--mostly by dispensing with mutually incompatible commitments when they arise--others (e.g. botanists, chemists) can hold me accountable for all the correct beliefs about the apple. The "content" of the apple, for me, is nothing other than the series of moves I make in this game.
EDIT1: You should also read rpedroso's comment, which does a great job exploring the many alternatives that arose after empiricism reached its impasse.
2: Not only empiricists, but possibly Kant on a very straightforward reading--although Brandom has a very different reading of Kant.
Isn't this name taken by a somewhat different philosophy?
> this movement appears to be a kind of hyper-empiricism informed by cognitive psychology, AI research, and Bayesian statistics.
The opening of the Wikipedia article on logical positivism asserts:
> Logical positivism and logical empiricism, which together formed neopositivism, was a movement in Western philosophy that embraced verificationism, an approach that sought to legitimize philosophical discourse on a basis shared with the best examples of empirical sciences.
Stephen Bond had a neat essay, perhaps a bit dubious on the AI/declarative side, about this exact topic. He writes a bit acerbically, but read at least past "For a long time I accepted this explanation at face value...".
> The reluctant scion of a domineering steel tycoon, one of the wealthiest men in Europe, Wittgenstein spent his childhood hob-nobbing with the highest of imperial Viennese high society — and society doesn't get much higher than that. Groomed from an early age to take over his father's industrial empire, Wittgenstein instead fucked off to England at the earliest opportunity, to devote his life to the study of the most uncommercial and practically useless subject he could find. Russell proved an ideal mentor, and Predicate Logic an ideal subject.
Logical positivism is a philosophical movement which maintains that statements are only meaningful if they can be formally derived or empirically verified.
In that respect, I think that logical positivism and the kind of empiricist approach identified by the OP have a significant amount in common. Similarly, I think the approaches share many of the same problems.
In my (admittedly extremely limited) exposure to him as a person through videos of his talks and some of his transhumanist writing, I didn't really get the impression that he's anything but a very nice and gifted person. So I think this practice of judging or dismissing entire lives by applying a simplistic theme is an antipattern, and this is true here, too. There's a reason why ad hominem attacks are generally frowned-upon.
Yes -- I personally found the certainty with which several conclusions are asserted as universal truths on LessWrong off-putting at times, especially when used in conjunction with the Rationality label, which insidiously implies that any other analysis of the subject matter would be inherently irrational.
However, some subjectively bogus tenets notwithstanding, I still think it's a valiant and intellectually stimulating attempt at building a new philosophical framework which could potentially keep up with science and future human development. At the very least LessWrong Rationality is a good basis for an ongoing discourse on the subject, and at its best it demonstrates a unique exploration of ethics and, indeed, rationality.
You may argue most or even all of this framework is lifted from earlier philosophical and scientific achievements, but in some areas standing on the shoulders of giants is actually a good sign you're in the right place.
The good bits are not original and the original bits are not good.
The problem is that LessWrong has a habit of neologism -- so EY will use his own term for something ("fallacy of gray" is one example -- known for 2000 years as the "continuum fallacy"), then his young readers, who have met whatever it is for the first time ever, will think his work is much more original and significant than it is 'cos they can't find his term for it. This cuts them off from 2000 years of thinking on the topic and increases LW's halo effect.
I personally don't care one bit if the good bits aren't original. They are approachable, and nobody else has done that for me. So I applaud Eliezer and his efforts, regardless of whether or not he has broken ground philosophically.
Furthermore, you wouldn't learn anything beyond the limits of Yudkowsky's knowledge, or -- more importantly -- that there was anything beyond those limits.
The habit of neologism makes stuff impossible to look up, and creates the illusion that this is new ground, not old, and that there isn't already a world out there.
su3su2u1, debating this matter with Scott Alexander (Yvain), sums up a lot of the problems with the worldview, and I largely agree with the summary. (I am as familiar with that worldview as anyone who doesn't actually drink the Kool-Aid can be, having been on LW around four years and having read not only the Sequences through twice but literally all of LessWrong through from the beginning twice.) https://storify.com/lacusaestatis/sssp-su3su2u1-debate
I'll quote one telling bit, which points out the level after Bayes:
> Heck, there are well defined problems where using subjective probability isn’t the best way to handle the idea of “belief” - when faced with sensor data problems that have unquantified (or unquantifiable) uncertainty the CS community overwhelmingly chooses Dempster-Shafer theory, not Bayes/subjective probabilities.
Do you remember the Sequences post mentioning the words "Dempster-Shafer"? Me neither.
(And then there's the use of "Bayesian" to mean things that nobody else uses the term for. As su3su2u1 puts it: "I suspect I’d be hard-pressed to write about probability theory in a way that wouldn’t fit some idea you cover by the word 'Bayesian.'")
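For anyone who, like me, had to look it up: here's a minimal sketch of Dempster's rule of combination, with the frame and the mass assignments invented purely for illustration:

    from itertools import product

    def combine(m1, m2):
        """Dempster's rule: combine two mass functions whose keys are
        frozensets of hypotheses."""
        combined, conflict = {}, 0.0
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y  # mass landing on contradictory intersections
        # Normalize by the non-conflicting mass.
        return {k: v / (1 - conflict) for k, v in combined.items()}

    # Two unreliable sensors reporting on whether a target is a bird or a plane.
    B, P = frozenset({"bird"}), frozenset({"plane"})
    either = B | P  # mass on the whole frame expresses ignorance directly
    print(combine({B: 0.6, either: 0.4}, {P: 0.3, either: 0.7}))

The contrast with Bayes is that mass assigned to the whole frame represents unquantified uncertainty directly, instead of smuggling it into a prior.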
Yudkowsky definitely gets credit as a good pop science writer. The habit of neologism, not so much. And he'd deserve more if he weren't so invested in the encapsulated, self-referential world that LW builds. In philosophy, Yudkowsky is the quintessential Expert Beginner: http://www.daedtech.com/tag/expert-beginner
untiltheseashallfreethem notes in http://untiltheseashallfreethem.tumblr.com/post/107159098431... : "I think Eliezer did a great service in writing these ideas up. But they are not his ideas, and I’m really worried that a lot of people read LessWrong, see that Eliezer is right about this stuff, assume he came up with it all, and then go on to believe everything else he says." And that's a serious problem when the good stuff is not original, and the original stuff is not good.
On the contrary, he seems very intent on citing the very books and people from which he learned these things. At least in my more limited experience. You definitely seem to have studied up on the issue much more than I.
The number in question was 3^^^3 using Knuth's Up Arrow Notation. As he explains, that's a lot of people: (3^(3^(3^(... 7625597484987 times ...)))).
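For reference, a sketch of the notation itself (the function is illustrative only -- nothing like 3^^^3 could ever actually be computed):

    def knuth_up(a, n, b):
        """a ^...^ b with n of Knuth's up-arrows."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return knuth_up(a, n - 1, knuth_up(a, n, b - 1))

    print(knuth_up(3, 1, 3))  # 3^3 = 27
    print(knuth_up(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
    # 3^^^3 = 3^^(3^^3): a power tower of threes 7625597484987 levels high.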
If we decide that one person being tortured for 50 years is worth a quick blink by all those other people, then a still tiny number of people (say, 7625597484987) being tortured would be worth everyone else blinking furiously for their entire lives.
So it's not about deciding one person's fate. It's about being consistent, because if you make an exception for one person, that turns into several hundred billion, then that small sacrifice everyone else was to carry now has destroyed everyone else, too.
(I think the issue is that people tend to round "speck of dust" down to 0, then multiply, then find that of course any amount of torture isn't worth 0.)
Personally, I take a more negative view and see the immense possibility for suffering as a reason that we should seek to destroy the entire multiverse. It's just not very nice otherwise.
As for the part of the argument that dust specks in so many eyes would cause the death of a small fraction of the people: that's far from a given, but for the sake of "steelmanning" let's assume it's true. At some point my own ethics place death as a higher-utility outcome than prolonged suffering. How many deaths vs how much suffering? That's hard to quantify.
Why not round it down to 0?
There's some level of discomfort in everyday life to start with that's effectively negligible for any normal person, simply because it's a prerequisite for interacting with the world (getting a raindrop in your eye, dealing with a minor wedgie, having an itchy nose, whatever).
That's why you can't round down first. Eliezer's entire intent there is to illustrate that by default, we suck at scale, we're scope insensitive.
Another way: If everyone in the US gave just a penny directly to a poor person, once in a lifetime, that person would be wealthy for life and be OK, right? A penny rounds down to zero, thus we can determine that the right action is to always give money directly to poor individuals, as this will cure poverty at a cost of nothing.
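A quick back-of-the-envelope, assuming a US population of roughly 320 million:

    # Hypothetical round numbers; the point is that pennies don't round to zero at scale.
    population = 320_000_000
    penny = 0.01
    print(population * penny)  # 3200000.0 -- $3.2 million from "nothing"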
You're right that if the only choice, ever, in the entire multiverse was this one 50 years or 3^^^3 specks of dust, then sure, you don't need to follow the rules/math. But we're never going to be faced with just a single decision. Instead, we should be consistent and follow the numbers to determine utility.
Your argument here is based on the premise that the subjectively-inflicted-pain of "a whole lot of specks of dust in a person's eye" can be treated as a multiple of "one speck of dust in a person's eye", which to me is the point of absurdity that makes the whole thought exercise fall apart.
(It's a bit like when someone says something absurd about Uber or Bitcoin and, when called out, says "But that's Econ 101!" Yes, but in second and third year you find out it's all a bit more complicated than that.)
That some arbitrarily minimal amount of subjective pain can be presumed to be actually distinguishable from the general background unpleasantness of being human in the first place.
Edit: Rephrased to remove gibberish resulting from temporary brain/keyboard disconnect.
How can pain be objective? The universe doesn't give a shit that some lumpy sacks of meat dislike certain experiences, and exactly the same action can be unpleasant for some people but pleasant for others. (BDSM enthusiasts, for example, often actively seek out experiences that other people would consider painful.)
Do you object that a pin prick is bad? Do you want to avoid being pricked?
That depends on the context. If it's acupuncture, for example, it can be quite good, even if the precise sensation is exactly the same as the otherwise unpleasant case of poking myself with a needle while sewing.
Also, minmaxing is a thing.
7 trillion people tortured for 50 years vs 3^^^3 people with 7 trillion dust specks.
Does that make the flaw more obvious? You can't round before multiplying. What am I misunderstanding?
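Or with invented utilities, where only the order of operations matters:

    speck = 1e-9      # hypothetical disutility of one dust speck
    torture = 1e12    # hypothetical disutility of 50 years of torture
    n = 10**30        # stand-in for 3^^^3, which is incomparably larger

    print(round(speck) * n)     # 0 -- round first and the specks vanish
    print(speck * n > torture)  # True -- multiply first and the specks dominate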
You're dodging the core of the problem, as stated above: "However, when pushed too far, those tools tend to break down—but the rationalist answer to that breakdown is all too often to embrace the model and discount the reality." https://news.ycombinator.com/item?id=9204442
Just because you love your model doesn't make its conclusions at extremes true.
That doesn't mean everything he writes stands up, but he's smart enough that you have to think a while about what might be wrong with his arguments, and that's entertainment enough.
He's a great writer, and that should be more than enough for the purposes of recommending a thing he wrote.
I think his writing has two particular things going for it:
- He explains concepts clearly with understandable examples.
- He presents concepts that aren't (or weren't) well known outside of academia to a wider audience.
One that does come to mind is:
alexanderwales writes relatively short, punchy stories that explore specific academic and narrative themes, and, importantly, generally work extremely well as stories even if you discount the thought-experiment aspects.
I don't think that this was the kind of criticism that was expressed, but on the other hand it's always easier to fight a straw man.
There is a bit of assertion that "timeless physics" is representative of true reality, but my reaction was to look it up and find that it's an interesting fringe theory. For all I know, EY doesn't even assert that same theory's truth. Nothing harmful in sparking curiosity, especially in things such as statistics and logical biases, which is where HPMOR puts its emphasis.
> Warning: The central idea in today's post is taken seriously by serious physicists; but it is not experimentally proven and is not taught as standard physics.
> Today's post draws heavily on the work of the physicist Julian Barbour, and contains diagrams stolen and/or modified from his book "The End of Time". However, some of the arguments here are of my own devising, and Barbour might(?) not agree with them.
su3su2u1 (who is an actual physicist) discusses the problem in http://su3su2u1.tumblr.com/post/98526012853/chapter-28-hacki... :
> There is no “true math of quantum mechanics.” [...] These are different mathematical formulations, over different spaces, that are completely equivalent.
> What Hariezer is doing here isn’t separating the map and the territory, its reifying one particular map (configuration space)!
> I also find it amusing, in a physics elitist sort of way (sorry for the condescension) that Yudkowsky picks non-relativistic quantum mechanics as the final, ultimate reality. Instead of describing or even mentioning quantum field theory, which is the most low-level theory we (we being science) know of, Yudkowsky picks non-relativistic quantum mechanics, the most low-level theory HE knows.
> So this is more bad pedagogy: timeless physics isn’t even a map, it's the idea of a map. [...] It seems very odd to just toss in a somewhat obscure idea as the pinnacle of physics.
Terry Pratchett was a good writer too. (Probably even better.) And that's all you need to be.
Yudkowsky has, like old Goethe had, grander aspirations. We humour them for their literary output.
When you notice Harry doing something dumb, oblivious, overconfident, condescending, etc. do not assume that this is the author's personality leaking through. It may be the intended reading.
I guess you could say I found it a bit hamfisted.
Also, read this: http://lesswrong.com/lw/k9r/cognitive_biases_due_to_a_narcis... It's a perfectly normal literary analysis of HPJEV as a narcissist and/or raised by narcissists. It's not Ph.D-quality rigour, but it does back its claims. Note the amazing special pleading in the comments - fans outraged someone would dare analyse their favourite thing in less than glowing terms.
They seriously think they can get this thing a Hugo, somehow evading all artistic critique along the way.
Some are, in fact, bad and insufferable on their own merits--much as HP is in this.
The community seems more than happy to leave it in that place rather than put it to the test by publishing it or attempting to make money from it.
As soon as someone tries to make money from it, even if just to recoup self-publishing costs, there will be a lawsuit and it'll be decided concretely, probably not in the community's favor.
It continued and -- finally! -- finished.
PS: Fellow readers, did it ever stop being awesome and/or become unpleasant from the perspective of the original plot?
On a lighter note, I was hoping right to the end that the last transfiguration would end up producing a house elf, ideally Dobby. :D
Eliezer, if you're reading this: thanks. It was awesome.
I'm joking of course, but that and your comment are equally smart ways of evaluating other people's deeds and ideas.
For the record, while I won't deny that I've had more to drink than I could handle in the past, I've always made it to the toilet in time to avoid any public embarrassment.
It's not in the past.
I'm in agreement regarding the absurdity and group think of Less Wrong, for reasons that aren't that relevant to this thread. That said, HPMOR is a fairly entertaining read, once you can get past the feeling that you're having an ideology forced on you through Harry's point of view. The book reminds me of things Ayn Rand has written in that sense.
(Wow. Am I missing something about status games that make the above open lie a good move for David Gerard? He's done it repeatedly, too. I'm confused, how is this a good thing from his perspective? Am I playing into his hands somehow by calling him on it each time?)
For the claim of inaccuracies: that article is cited to the phrase level for good reason. If you're claiming inaccuracies, you're going to need actual details of why the cites are wrong.
For comparison, "Harry Potter and the Natural 20" is a better piece, because the author just wants us to have a good time, and is not constantly trying to prove how smart he is. (HPMOR reminded me a bit of Freakonomics in that respect. But Freakonomics is way worse.)
HPMOR could do with some radical editing, perhaps?
As a book that is literally about negotiating your way out of terminal nuclear war, I've also found it an excellent practical guide to raising a toddler.
I read his later books, too. They are good, but not as great.
Schelling wrote well, I am not sure about his actual effect on policy. There were other hawks, like von Neumann, another geeks' hero.
Among other things, I've often seen people try to sell HPMOR as, "Harry Potter fanfiction in which Harry applies the scientific method to the magical world of Harry Potter!" And that shows up a little [^1] but is by and large not the content of the work. Harry does very few experiments to verify his hypotheses and is actually a broadly incurious character. For example: Harry is nominally interested in eliminating death, but never once investigates the many magical mechanisms which seem to eliminate death, preferring to just talk about it instead. Similarly, he comes up with hypotheses as to how magic works, but never bothers to investigate them beyond speculation.
Additionally, a lot of the "solutions" to problems Harry has are relatively unsatisfying: Harry often circumvents a problem by "clever" applications of the rules, but because the rules of magic as given both in Rowling and in Yudkowsky are ill-defined and even self-contradictory, this seems less clever and more like arbitrary author fiat. Harry is also given a small time machine early on in the plot, and so more often than not, the solution to his problems is, "Use the time-turner again."
Finally, I personally found the writing style to be generally in need of editing—not terrible but certainly not polished—but I consider that to be a smaller problem than the above plot-related issues. EDIT: All this doesn't mean that you shouldn't like it! These are just problems that I had with the work.
[^1]: The bit where Harry tries to experimentally prove that P=NP is probably my favorite part of the entire thing.
The thing is... HPMOR isn't about the scientific method. It's about rationality. It's not a story about distilling a universal theory of magic or triumphing over death; it's a story about a kid who's had a rationalist education fumbling his way through a world that refuses to actually make sense.
Perhaps also germane is that I am not personally a believer in Less Wrong-style rationality, and so the intellectual content of the work was not relevant to me except in the detached way that philosophical or political or religious schools can be interesting to non-adherents (which is one reason I read as much as I did.) Whether the story accomplishes its actual goal, then, is something I can't judge, but I can say that, as a non-rationalist (irrationalist?) I didn't find it to be a good or engaging story.
But as I said in my previous comment, this is my own reaction, and I include it for informative reasons, not because I think others should necessarily share it!
That said, it is rare that a work of fanfiction manages to be a legitimate deviation and self-contained work that both comments on and reflects upon its originator usefully. Generally speaking, I prefer to give all fiction the benefit of the doubt: whatever its failures, it is hard to ignore that it was successfully written, and as someone who has tried my hand at it, I know firsthand the challenge overcome. There's a "Man in the Arena" quote insertion that belongs here. I did find it to be a good and engaging story, though, if painfully in need of an editor, which is not a unique remark in the land of fanfiction.
If you want to really have fun, compare HPMOR with the Left Behind series, which qualifies as fanfiction as much as anything. There are probably some fascinating parallels to be found with their relationship to their original works and author intentions and the like. You can find a much higher quality counterpart to su3su2u1 in Fred Clark's Slacktivist blog, which dissects it on a page-by-hilarious-page basis.
My personal experience with Less Wrong-style rationalism, to simplify the situation aggressively, is that it has a core of good, useful tools (Bayesian reasoning, strict positivism, utilitarianism) that I have no problem with. However, when pushed too far, those tools tend to break down—but the rationalist answer to that breakdown is all too often to embrace the model and discount the reality. This general refusal to regard their core tools with suspicion results in beliefs which are paradoxically irrational: when faced with e.g., utilitarianism condoning torture to prevent mild discomfort, the rationalist response is not, "Perhaps human experience does not map straightforwardly to integers—we should re-examine our tools," but rather, "As our mathematical tools are of course correct, we must believe in this conclusion." This belief in math-over-matter is a major part (though not the only factor) in my skepticism towards the kind of rationalism promoted by Less Wrong.
The strongest argument against rationalist utilitarianism seems to be that people don't like the cognitive dissonance it imposes on hypocrites.
And calling people hypocrites because things like involuntary organ donation give them "cognitive dissonance" is absurd. Also regarding hypocrisy, note that Yudkowsky works for an organization whose goal is to protect humanity from Skynet, while people in Africa are starving. (I'm not personally attacking him, he can work on whatever he likes, and I think governments of wealthy nations have both the resources and responsibility to alleviate starvation, and individual efforts are mostly pissing in the wind. But the parent poster brought up sweatshops &c. so I went there.)
I'm unwilling to post spoilers, which makes this unfairly undiscussable, but the meaning of the original prophecy did not--could not--come down to rationalism and could not for reasons that were repeated a few times through the fic. I can't tell if this was intentional on Yudkowsky's part, and I don't really care, since it really changes nothing about everything else.
For the record, though, when I disparage utilitarianism, I promise it isn't just "Eliezer's".
No, they've said "this is a reductio ad absurdum, perhaps reality is a bit more complicated than that." You don't get to assert your assumption - that simplistic utilitarianism works as advertised - as evidence.
Consider, for instance, minmaxing as an alternative. (Like real-life AI work tends to.) What answer does that give?
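For illustration (all utilities invented), here's how a maximin rule diverges from the simple sum on exactly this problem:

    # Toy comparison of aggregate-sum utilitarianism vs. maximin ("minmax"),
    # which optimizes for the worst-off individual.  All numbers made up.
    speck, torture = -1e-9, -1e12
    n = 10**30  # stand-in for 3^^^3

    outcomes = {
        "torture one person": {"total": torture, "worst_off": torture},
        "speck everyone":     {"total": speck * n, "worst_off": speck},
    }

    print(max(outcomes, key=lambda o: outcomes[o]["total"]))      # torture one person
    print(max(outcomes, key=lambda o: outcomes[o]["worst_off"]))  # speck everyone

The sum rule prefers the torture; maximin prefers the specks. Which aggregation you pick is itself an ethical assumption, not a theorem.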
At times I was wondering if the author had any familiarity with the material beyond a wikipedia (or tv tropes) summary.
Take one in common with the Harry Potter franchise -- consider that "J.K." Rowling's publisher asked her to hide her gender. Obviously there are many absurdities in such a system. Unicorns and horcruxes are perfectly fine -- but god forbid you have a lead character who's (say) a black girl. Whites and males can't be expected to empathize with her!
I can enjoy a product while acutely aware of such things, since our backwards culture leaves little else to conveniently enjoy. But I decided not to go to the local Wrap Party, given the sort of people most likely to know about HPMOR and lesswrong.com posts, and what I researched of the local organizers. Better things to do with my day.