At the risk of going "no true Scotsman", I hate this kind of anti-intellectual nonsense. The methods of science, engineering, and rational reasoning are the best tools we have for solving the world's problems. They were literally invented for that. They're reified applied reasoning. You eschew them at your own peril.
Now of course, to solve problems involving humans, you have to factor in the human element - the complexity of our individual emotions, and of the societal interplay. This is where a lot of attempts at solving problems fail, and it's appropriate to criticize that. But that doesn't give you a free pass to go by your gut in contexts our monkey brains have never experienced before. It means we need to double down and carefully, rationally, find out what works and what doesn't.
I think this is where a lot of attempts at defining problems fail. I don't think this has any impact on the validity of the scientific method of problem solving, once you've clearly defined your problem.
How do you feel about the fact that what is rational and irrational (indeed, our whole meta-rational basis) has changed over time according not to new discoveries but merely to the movement of the cultural zeitgeist? What about the differences between instrumental reason (and rationality), technological rationality, in which modern technology changes what we see as rational (see Marcuse), and the rationality of ends, rather than just the means?
My point is that we can't simply say "rationality" as if it's a fixed tool we can use so easily; at the very least, we can't use it without also considering the current ideological context in which we are using it. Surely you admit there are different rationalities, such as economic rationality, which would seem to eschew any human factor that does not count towards profit efficiency?
Or, in other words, the principle of "garbage in, garbage out" applies.
But if your claim is that rationality is independent of culture (which, as I've already noted, is rubbish, since the idea of what is rational and irrational changes with the cultural zeitgeist, unless you have some claim to objective rationality, which I don't), then we can't use a static method of rationality to judge the goodness of a particular goal, which is obviously context-dependent. So if not rationality, then what? I suggest critical thinking in its place.
That's what I was trying to say in my previous comment. Even if you execute rational reasoning perfectly, if you start with garbage input, you end up with garbage output. If you intrinsically value something horrible, e.g. murdering kittens, rationality isn't going to tell you to not do it; on the contrary, it'll suggest the most effective means of doing that.
> But if your claim is that rationality is independent of culture (which, as I've already noted, is rubbish, since the idea of what is rational and irrational changes with the cultural zeitgeist, unless you have some claim to objective rationality, which I don't)
I meant it in the same way in which 2+2=4 or F=ma are independent of culture. Of course, the work of discovering rational reasoning happened over a long time, with important contributions in Ancient Greece and earlier. But we've only recently managed to get some good formal tools applicable to real-life problems of reasoning, namely probability theory.
> then we can't use a static method of rationality to judge the goodness of a particular goal, which is obviously context-dependent. So if not rationality, then what? I suggest critical thinking in its place.
What do you suggest this "critical thinking" is made of? As for judging the goodness of a particular goal - you can decompose that into parts that can be judged rationally on their effectiveness (and consistency), and parts that have to be judged morally.
To your point, I now believe all design problems are human problems, which sometimes involve math and science.
I read Neil Postman's Technopoly and Donald Norman's The Design of Everyday Things around the time I was figuring out that computers weren't delivering on their promises in the areas I care about (CAD for A/E/C, education), and in some cases made things worse. It was validation for what I was experiencing and observing.
I credit Postman and Donald Norman for my conversion from technophile to humanist.
I don't have anything to say about rationality. I gave up (forfeited) after the 2016 USA elections.
One thing that I feel is always missed in discussions of consequentialism is second-order effects, which are what make such decisions untenable. While the classic "murder a stranger for organs" might technically save more lives, it causes all sorts of nasty ripple effects: people would rightfully become more paranoid, or take measures to render their organs useless after death out of sheer strategic spite. Taking a stand for principles is still possible in that framework, the goal being to make bad actions more expensive or good actions cheaper.
Still, there are certainly good points about needing to choose how one wants to shape society while setting goals, as well as recognizing that society doesn't go as planned or projected, and that it changes in response to your actions.
Found myself in a perplexing discussion not long ago that, after a lot of words, seemed to operate similarly to what you're describing here.
The topic was banning plastic bags rather than something with more politically loaded outcomes like Brexit, though. The argument the other person brought up was to equate banning plastic bags with banning plastic straws, and how the latter constituted a real harm to the disabled community and those who may have a valid need for straws. That forced me to ask whether a better outcome for someone with a physical impairment would be to promote durable fabric bags that are less likely to suffer sudden structural failure (an overfilled plastic bag that rips open unexpectedly), and where the logical limit was in comparing the harm of removing plastic straws with that of removing plastic bags.
They clung to the claim that banning bags was equivalent to robbing the physically disabled of needed resources, but couldn't really articulate any position beyond that emotional appeal to a community who, I conceded, should be considered more than they probably are. Comparing bags to straws on the grounds presented didn't feel very outcome-oriented, though.
It's an endlessly interesting phenomenon of thought to watch take place in real time.
There's a very famous fragment of the Polish romantic ballad "Romantyczność" by Adam Mickiewicz:
Feeling and faith stronger speaks to me,
than the eye and the glass of a wise man.
When Poland regained independence in 1918 (mostly thanks to good luck and WW1, just like other countries in the region), this attitude claimed credit for the success ("if not for the failed uprisings, we wouldn't be here"), and people believed it. Failed uprisings are celebrated to this day, and the few rational generals who wanted to prevent the useless massacres, and were hanged for it, are still considered traitors.
It then led the governments of interwar Poland to pursue an unrealistic and opportunistic strategy that resulted in 1/6th of the population and 1/7th of the territory being lost in WW2, despite Poland supposedly being on the winning side. But it sure felt nice to be brave and be the "first to fight". People still boast "Poland - first to fight" like it's a good thing to be stupid and die for no reason.
This national romanticism still very much defines the public debate in Poland, together with the only mainstream alternative: positivism and pragmatism. And romanticism is still winning, 200 years later. We only got roughly 25 years of pragmatism after 1989, and it's over now.
It's why populists can win elections - because people want to ignore reality and stop analyzing it. "Just do what feels right, it will be OK for sure." "Winning trade wars is so easy". Everything is easy if you ignore reality because it's too complex.
It's a very harmful attitude. Don't let it take over your culture, it's very hard to get back to the enlightenment once you leave it behind.
The fact that we don't understand everything scientifically now, even some of the things we feel, does not mean we cannot try to do so in the future. I accept faith only as a step towards scientific understanding.
Let us know when you find formal proofs for human values or systems of morality. That's almost certainly Nobel Prize-worthy.
Edit: forgot to put in time limits
The various disciplines that generally rely on "formal proofs" are referred to as "formal sciences" (logic, mathematics, statistics, etc.), but are technically not actual "sciences" since they are fundamentally abstract (as opposed to how we defined science above) - which is why they are generally concerned with formal proofs and not empirical evidence. Of course the formal sciences frequently provide the natural sciences and social sciences with ways to describe the physical/natural world and the social world, respectively.
Science is about building and refining models (theories) in order to make them match observable reality as closely as possible, and then using those models to predict what happens in the future - both on its own, and in consequence of us poking stuff. That's what it means to know "how something works".
Engineering takes these models and adds a "what's the best way to poke things to achieve a desired outcome?" aspect.
Formal proofs are for mathematicians. Mathematics is a purely abstract invention and operates in its own universe, where absolute formal proofs are possible.
In your defense, science appears to follow formality, though we often run into holes in our theories when a confounding number of variables are in play.
Me, personally, I don't know what that difference really is. Science is science because it works. If it didn't work, it wouldn't be science.
Mathematics can and will find ways to apply itself to the real world, no matter how approximately, as long as that difference matters to somebody.
For example, a system like Coq (https://coq.inria.fr/) is concerned with formal proofs, but is not really a foundation for most of science.
If this advice is followed, we would have to knowingly choose to do something that we expect to be worse than another option we're considering. What do we hope to gain from that?
"Appeal to disgust" is a fallacy for a reason, and laws should be rational.
Do we value keeping fish alive more than keeping old people alive via power-hungry air conditioning?
There are millions of similar conundrums, none of which are solvable via appeal to objective truths.
Put another way, you can only have a logical argument with someone who already agrees with your beliefs.
Technically true, but that's an easy (and all too frequently used) way to dismiss other people's arguments and keep believing whatever you want.
It's very unlikely that any real-world policy question is a question about your fundamental values. You most likely don't value marijuana intrinsically. The future of a river dam is probably not the real center of your morality. Rational reasoning is useful to illuminate the connection between the subject of an argument and your moral values - a connection that is usually pretty complex, and that allows sufficiently honest and patient people to reach the same conclusion given the same knowledge about the problem.
Consider the towns and cities by bodies of water that have either dried up or been polluted. It's financially ruinous, because it kills tourism, fishing, or both; the salty dust left behind by the water causes horrible health problems; and there's a natural brain/brawn drain that leaves only the old and sickly behind to either fend for themselves or live on a government paycheck. That's just off the top of my head. I can easily think up a lot more messed-up scenarios that have already happened in the world (see the Aral Sea).
But along the way to the unfortunate end you describe above, many good things came of the dam. Any calculus needs to evaluate the tradeoffs. "Were they worth it?" doesn't lend itself to objective-factual analysis, because "worth" is all about values.
Those tradeoffs are "hard to decide via objective truths" only because they haven't been decomposed enough. You have to dig deeper into them, split them apart, ask why we care about something, and then ask again and again. It might get hard to keep track of the expanding tradeoff space, which is probably why people shy away from digging into things too deeply. But if you do the work, you may find that there is a way to solve the original problem in a way that satisfies everyone's low-level values.
Reminds me of essays of Bret Victor, who occasionally seems to be making a related point - a lot of policy debates quickly devolve into appeals to emotions, because we lack tools to effortlessly decompose and analyze actual problems. See e.g. http://worrydream.com/ClimateChange/#media for a good example.
So are engineering problems.
No, it's not the only way. It's the easy way.
The hard way is to consider other consequences of both choices, including long-term effects, including changes that aren't 0%-100%, but slightly change probability of other events.
Hurting the ecosystem might increase the probability of flooding the area, which will hurt some of the people that you are trying to comfort by giving them cheaper energy. So, maybe it's objectively irrational to build the dam, no matter whether you value people or fish more?
Maybe the population in the area is decreasing anyway, and cheap power won't help? We can predict such things pretty reliably. So why build a huge dam and hurt the environment if in 40 years it will be useless?
It's impossible to know for sure, but it's possible to estimate. Abandoning rationality altogether just because it's not immediately obvious what to do is very irresponsible.
I know it was just an example, but IMHO it's a representative example. Many of the issues at the center of the public debate now are objectively, scientifically solved, or so close to being solved that it doesn't matter what the exact details are.
Yet people continue to argue about them because of tribal thinking, religion, corporate interests, etc. See global warming, equality for sexual orientations, war on drugs, public vs private healthcare, etc.
The OC proposed that the human element is inherently unpredictable, so rationality isn't the solution. Yet - thanks to the law of large numbers - big groups of people are more predictable than small groups or individuals. So we may not know who will vote X or Y, but we can predict how many people will vote X and Y with reasonably small error. We have big data that can predict you're pregnant before you know you're pregnant, based on your browser history alone, FFS.
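The law-of-large-numbers point can be sketched in a few lines of code. This is a minimal toy simulation (the 60% preference and the group sizes are made up for illustration): each individual vote is random, yet the group's aggregate vote share becomes predictable as the group grows.

```python
import random

def prediction_error(group_size, p=0.6, trials=2000, seed=42):
    """Average absolute error between an individual's true probability p
    of voting X and the observed share of X-voters in a randomly
    sampled group of the given size."""
    rng = random.Random(seed)
    total_error = 0.0
    for _ in range(trials):
        # Each person votes X with probability p, independently.
        votes_for_x = sum(rng.random() < p for _ in range(group_size))
        total_error += abs(votes_for_x / group_size - p)
    return total_error / trials

# The aggregate prediction error shrinks as the group grows,
# even though each individual vote stays a coin flip.
for size in (10, 100, 10000):
    print(size, round(prediction_error(size), 4))
```

For a group of 10 the average error is on the order of 10 percentage points; for 10,000 it drops well below one point, which is the sense in which big groups are more predictable than individuals.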
Seriously, a better way would be to start doing 5-whys on this, instead of going for "non-objective things". A quick attempt will tell you that we care about both the ecosystem and power, but depending on the local context, we might care more about one or the other. Maybe we can import power from elsewhere, or put a dam in a place where it won't disturb the ecosystem so much. Maybe the ecosystem can handle the dam, or maybe the power really is desperately needed.
You can quantify this all if you bother. Problem is, most people don't bother, they very quickly stop thinking about it, and start going by their gut.
(I'd say, they stop behaving like adults, but since a lot of adults have this problem, I guess we need a different phrase.)
Caring more about one or the other is exactly the issue. The degree to which we should care about either at all cannot be resolved objectively.
Putting a dam in a place where it won't impact the ecosystem so much - sure, we can minimize the tradeoffs, but the tradeoffs remain, and they all resolve to what we value vis-à-vis what it costs. Which is not objective.
Think of it as akin to mathematics. Mathematics starts with axioms, which can be completely arbitrary, and then derives an entire universe from them by following only objective, formal reasoning. Similarly here, you have some personal values that are (to some extent) arbitrary, but accepting those, you get to use objective reasoning to devise a set of tradeoffs that satisfies those values the best.
Also sometimes different people with different values aren't really in conflict, they just don't have enough knowledge to realize their values aren't being violated. Fix that, and you can get agreement.
Why are those non-objective?
There is no objective answer to that question.
Both the fish and the electricity need to be considered in a long term utilitarian framework.
The only reason you seem to have two conflicting objective truths is because your time window is too short.
You have an infinite regress problem here. You are eventually going to have to fall back on non-objective feelings about how the world should be to justify your actions.
Depending on how you define "utility" - this is either a meaningless tautology that doesn't say anything at all, or a subjective belief that can be easily challenged by equally subjective alternatives.
Evidenceless beliefs are what need to dictate the goals of policy, because there is no other way to decide on a goal. Once the goal has been decided, objective truths must be what determine the best way to reach that goal.
I disagree: I am strongly opposed to force by violence (laws) to prevent people from doing stupid things to themselves. In this sense, I do not necessarily consider saving lives a good thing (of course, there do exist lots of cases where I do).
P.S.: I strongly recommend wearing seatbelts, but am opposed to seatbelt laws.
From what I've been told and have read, the fines for not wearing a seatbelt are less about preventing people from making a potentially life-threatening decision and more about preventing people from making a really bloody mess that taxpayers then have to pay to clean up. A dead body strapped inside a car is a lot easier to clean up than a dead body thrown 60 feet down the road and torn into who knows how many pieces in the process. Not to mention that the equivalent of a human cannonball can cause further damage to property and to people unfortunate enough to be in the way.
Thinking of the fine as "increasing cost to society" instead of "preventing you from endangering yourself" makes it more agreeable to me, as I think we agree on the stance that laws preventing people from doing stupid things to themselves aren't good laws. Laws that reduce wasteful costs to society, so that funding can be put into less wasteful things, are good laws though, in my opinion at least.
Since most laws preventing people from doing stupid things to themselves save wasteful spending, I agree with them within that framing of the picture.
You are forcing folks to internalize an externality. Same with carbon cap & trade schemes. Same with helmet laws. Same with toll roads. Etc...
We already do this in a quite literal way with money: the price of things follows the subjective valuation of those things. For example, if a bit of paint on canvas in a frame that would otherwise be worth only $40 is the Mona Lisa, then it is suddenly (and seemingly magically) worth $800 million. You could use this to evaluate joy and harm as well, in the form "what is the maximum amount of money you would spend to eliminate this harm or to experience this joy?"
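That "maximum amount you would spend" question is essentially a willingness-to-pay survey. A minimal sketch of aggregating such answers (all names and dollar amounts below are made up for illustration; this is a crude sum, not a serious welfare-economics method):

```python
# Hypothetical survey: each respondent states the most they would pay
# to secure a good (or avoid a harm). Summing the stated maximums
# gives one crude, money-denominated measure of its subjective value.
responses = {
    "clean river": [120, 45, 300, 0, 80],
    "cheaper power": [200, 150, 60, 90, 10],
}

def total_willingness_to_pay(amounts):
    """Aggregate stated maximum payments into a single dollar figure."""
    return sum(amounts)

for good, amounts in responses.items():
    print(good, total_willingness_to_pay(amounts))
```

Even this toy version shows the point of the comment: the inputs are subjective valuations, but once stated in money they can be compared and traded off like any other quantity.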
Is it? Any kind of value system must presuppose the value of life because only life can hold something valuable. Value systems that do not are therefore inconsistent.
It would be a rather silly value system for a human to have, but there wouldn't be any inconsistency in it.
With the invention of machines, we could replace life with more efficient automated systems that produce paper clips, but stupid machines cannot adapt to changing circumstances, like outside threats in the form of asteroids, climate change, earthquakes, and so on. Therefore the machines should be as intelligent as possible, so as to adapt to any circumstances and ensure the continued production of paper clips.
Intelligent machines are simply another form of life. This obviously wouldn't happen overnight, but with the gradual introduction of more and more technology, the creators would more and more integrate with their machines.
Any value system that denies the value of life is simply inconsistent.
I don't want to live in that world, though. I think this works as a case for the Brexit position only if you can demonstrate that people will in fact be materially better off (not in money but in well-being, which can factor in feelings like the one the author speaks about) after Brexit. That's a hard thing to demonstrate scientifically, since you can't run an experiment, but that's the argumentative stance you'd need to take to convince me.
One reason why I disagree with [technocracy] is because of a core assumption that is embedded deep within it, namely that public policy is at root a values-neutral project. In this worldview, there is an objective Good that we all strive towards, and our progress as a society can be measured by the velocity at which we approach that objective Good. We can determine this velocity by taking measurements — by gathering Data. These data will then tell us if we are on or off course, in much the same way that star sightings can do for a mariner lost at sea.
This works for the mariner, because the stars are objective. It is not a matter of opinion where in the sky the North Star is. But “good,” in terms of public policy, is most definitely not objective. My definition of what is Good is informed by my background, my experiences, my ideology; my values. And your definition of what is Good is informed by yours. Your North Star, in other words, is in a different place than mine is — which makes trying to navigate by taking sightings of it a perilous proposition.
What if we invented some gene therapy that could make everyone instantly and painlessly racially ambiguous? All looking like Simpsons characters or something. We could just poof and everyone would be the same! It would go so far towards eliminating racism and prejudice, which is unarguably a good goal. Some questions, though:
1. Would this really enrich human experience? Would it be "right"?
2. If someone didn't want the treatment would it be right to force them to accept it, even though it might end racism?
3. Would this actually end prejudice?
4. Could this technology ever exist?
The author's answers would be: "Nope", "Definitely not", "Almost certainly no", and "If it couldn't, or we have no plans to make it, why even discuss it?" Now think about it like this:
1. Will Facebook having "better algorithms" end fake news?
2. Will Google and Apple adding fancy new apps end smartphone addiction?
3. Will cute little notes from Discord in their app end our huge lack of participation in democracy?
The author would say no to all of them, and he's right.
Because even when it's well-meaning, I still can't stand it. Their (or our, I suppose) conviction that they are right fuels a Randian commitment to individualism. But then they want to appear woke and smart, and so they start talking about basic income and how it's fine to have half the population just sort of subsisting. Or that we'll fix democracy with smartphones and blockchain.
Also, the tendency towards being not just temporarily embarrassed millionaires, but temporarily embarrassed tech billionaires.