The world is not an engineering problem (theviewfromcullingworth.blogspot.com)
70 points by jajag 9 days ago | 71 comments





Like hell it isn't.

At the risk of going "no true Scotsman", I hate this kind of anti-intellectual nonsense. The methods of science, engineering, and rational reasoning are the best tools we have for solving world problems. They were literally invented for it. They're reified applied reasoning. You eschew them at your own peril.

Now of course to solve problems involving humans, you have to factor in the human element - the complexity of our individual emotions and of the societal interplay. This is where a lot of attempts at solving problems fail, and it's appropriate to criticize that. But that doesn't give you a free pass to go by your gut in contexts our monkey brains have never experienced before. This means we need to double down and carefully, rationally, find out what works and what doesn't.


I don't think this essay is anti-intellectual at all. I think it is an exploration of the longstanding philosophical problem of trying to rationally design a world for irrational beings. The excerpt from Dillow on consequentialism brings to mind an idea from James Scott: that elites control policy discussions by limiting the accepted terms of the argument to legible, rationalizable factors - generally economic and technocratic variables visible from a top-down view. This can place serious limits on our access to the store of human knowledge embedded in emotion and practice. What we consider "irrational" impulses are sometimes the products of generations of evolutionary tuning that would seem to be feats of engineering brilliance if they came with the proper documentation. Meta-rationality includes exploring the bounds and the weaknesses of the tools in our contemporary rationalist arsenal, as well as searching under the couch cushions of irrationality for the occasional bit of value.

Thanks for the succinct account of irrational impulses, and for the James Scott ref. Can you recommend any particular book of his? My local library appears to have _Against the Grain_ on hand.

_Against the Grain_ is great. I'm drawing upon _Seeing Like a State_ and _The Art of Not Being Governed_.

> This is where a lot of attempts at solving problems fail

I think this is where a lot of attempts at defining problems fail. I don't think this has any impact on the validity of the scientific method of problem solving, once you've clearly defined your problem.


That too. Not much good comes from efficiently "solving" what turns out to not be the real problem. You have to start factoring in the human element at the very top, at the level of problem discovery and definition.

> The methods of science, engineering, and rational reasoning are the best tools we have for solving world problems.

How do you feel about the fact that what is rational and irrational (indeed our whole meta-rational basis) has changed over time according not to new discoveries but merely to the movement of the cultural zeitgeist[0]? What about the differences between instrumental reason (and rationality), technological rationality, in which modern technology changes what we see as rational (see Marcuse), and the rationality of ends rather than just the means?

My point is that we can't simply say "rationality" as if it's a fixed tool we can use so easily, at least, we can't use it without also considering the current ideological context in which we are using it. Surely you admit there are different rationalities, such as economic rationality, which would seem to eschew any human factor that does not count towards profit efficiency?

[0] https://en.wikipedia.org/wiki/Eclipse_of_Reason_(Horkheimer)


The way I see it, rationality doesn't choose your values for you, the same way mathematical reasoning doesn't choose your axioms for you. It's not a fixed tool, in the sense that we're still discovering its various methods, but it is a tool that should not depend on cultural context - because what works to achieve your goal or correctly predict consequences doesn't change even as your goal might.

Or, in other words, the principle of "garbage in, garbage out" applies.


Isn't rationality the principle of GIGO, though? Let's say I have a garbage idea, but my methods for achieving it are rationally constructed - isn't this even a little problematic? Surely we must not only judge the means (while ignoring any aspect of the goal) but also examine whether the goal itself is good.

But if your claim is that rationality is independent of culture (which, as I've already noted, is rubbish, since the idea of what is rational and irrational changes with the cultural zeitgeist, unless you have some claim to objective rationality, which I don't), then we can't use the static methods of rationality to judge the goodness of a particular goal, which is obviously context-dependent. So if not rationality, then what? I suggest critical thinking in its place.


> Isn't rationality the principle of GIGO, though? Let's say I have a garbage idea, but my methods for achieving it are rationally constructed - isn't this even a little problematic? Surely we must not only judge the means (while ignoring any aspect of the goal) but also examine whether the goal itself is good.

That's what I was trying to say in my previous comment. Even if you execute rational reasoning perfectly, if you start with garbage input, you end up with garbage output. If you intrinsically value something horrible, e.g. murdering kittens, rationality isn't going to tell you to not do it; on the contrary, it'll suggest the most effective means of doing that.

> But if your claim is that rationality is independent of culture (which, as I've already noted, is rubbish, since the idea of what is rational and irrational changes with the cultural zeitgeist, unless you have some claim to objective rationality, which I don't)

I meant it in the same way in which 2+2=4 or F=ma are independent of culture. Of course the work of discovering rational reasoning happened over a long time, with important contributions in Ancient Greece and earlier. But we've only recently managed to get some good formal tools applicable to real-life problems of reasoning, namely probability theory.

> then we can't use the static methods of rationality to judge the goodness of a particular goal, which is obviously context-dependent. So if not rationality, then what? I suggest critical thinking in its place.

What do you suggest this "critical thinking" is made of? As for judging the goodness of a particular goal - you can decompose that into parts that can be judged rationally on their effectiveness (and consistency), and parts that have to be judged morally.


You're a better person than me. Since you responded, I tried to read the OC. Word salad.

To your point, I now believe all design problems are human problems, which sometimes involve math and science.

I read Neil Postman's Technopoly and Donald Norman's The Design of Everyday Things around the time I was figuring out that computers weren't delivering on their promises in the areas I care about (CAD for A/E/C, education), and in some cases made things worse. It was validation for what I was experiencing and observing.

I credit Postman and Donald Norman for my conversion from technophile to humanist.

I don't have anything to say about rationality. I gave up (forfeited) after the 2016 USA elections.


The Brexit rhetoric, even though it probably was meant in abstract terms, seemed rather romanticist, which I see as a danger in itself - dogmatically clinging to abstract ideas because of feelings, regardless of outcomes.

One thing that I feel is always missed in discussions of consequentialism is second-order effects, which are what make such decisions untenable. While the classic "murder a stranger for organs" might technically save more lives, it causes all sorts of nasty ripple effects - people would rightfully become more paranoid, or take measures to ensure the uselessness of their organs after death out of sheer strategic spite. Taking a stand for principles is still possible in that framework - the goal being to make bad actions more expensive or good actions cheaper.

Still, there are certainly good points about needing to choose how one wants to shape society while setting goals - as well as recognizing that society doesn't go as planned or projected, and that it changes in response to your actions.


> The Brexit rhetoric, even though it probably was meant in abstract terms, seemed rather romanticist, which I see as a danger in itself - dogmatically clinging to abstract ideas because of feelings, regardless of outcomes.

I found myself in a perplexing discussion not long ago that, after a lot of words, seemed to operate similarly to what you're describing here.

The topic was banning plastic bags rather than something with more politically loaded outcomes like Brexit, though. The argument the other person brought up was to equate banning plastic bags with banning plastic straws, and how the latter constituted a real harm to the disabled community and those who may have a valid need for straws - forcing me to ask whether a better outcome for someone with a physical impairment would be to promote durable fabric bags that are less likely to suffer sudden structural failure (overfilling a plastic bag that rips open unexpectedly), and where the logical limit was in comparing the harm of removing plastic straws with that of removing plastic bags.

They clung to the comparison that banning bags was equivalent to robbing the physically disabled of needed resources, but couldn't really articulate any position beyond that emotional appeal to a community who, I conceded, should be considered more than they probably are. Comparing bags to straws on the grounds presented didn't feel very outcome-oriented, though.

It's an endlessly interesting phenomenon of thought to watch take place in real time.


> For my part, I prefer things a little messy because not only are the solutions so often dependent on coercion but they also require that the ordinary citizen's faith and feelings are denied. Maximising utility seems a good thing but it is not the main reason why people do things like set up business, create charities, build village halls, paint, sing, create or innovate. Technocracy treats the world as an engineering problem when it's an unfolding story, explorers in a dense jungle not white-coated scientists in a laboratory.

There's a very famous fragment of the Polish romantic ballad "Romantyczność" by Adam Mickiewicz:

    Feeling and faith speak stronger to me,
    than the eye and the glass of a wise man.
It was written in 1821 in occupied Poland, and (together with other similar literature) was responsible for creating the nationalist romantic movement that caused several failed uprisings, countless deaths, and whole generations of educated patriots being forced to emigrate to escape repression.

When Poland became independent in 1918 (mostly thanks to good luck and WW1, just like other countries in the region), this attitude claimed credit for the success ("if not for the failed uprisings we wouldn't be here"), and people believed it. The failed uprisings are celebrated to this day, and the few rational generals who wanted to prevent the useless massacres, and were hanged because of that, are still considered traitors.

Then it caused the governments of interwar Poland to pursue an unrealistic and opportunistic strategy that resulted in 1/6th of the population and 1/7th of the territory being lost in WW2, despite Poland supposedly being on the winning side. But it sure felt nice to be brave and be the "first to fight". People still boast "Poland - first to fight", like it's a good thing to be stupid and die for no reason.

This national romanticism still very much defines the public debate in Poland, together with the only mainstream alternative - positivism and pragmatism. And romanticism is still winning, 200 years later. We got only about 25 years of pragmatism after 1989, and it's over now.

It's why populists can win elections - because people want to ignore reality and stop analyzing it. "Just do what feels right, it will be OK for sure." "Winning trade wars is so easy". Everything is easy if you ignore reality because it's too complex.

It's a very harmful attitude. Don't let it take over your culture; it's very hard to get back to the Enlightenment once you leave it behind.


The idea here is something we've lost from our thinking, one of those virtues Deirdre McCloskey writes about: the idea of faith, that there are things we have to take as felt, not as demonstrated by science.

That we don't understand everything scientifically now, even some of the things that we feel, does not mean that we cannot try to do so in the future. I accept faith only as a step towards scientific understanding.


My favorite part of Antifragile was where the author defines the 'Soviet-Harvard delusion' as "the unscientific overestimation of the reach of scientific knowledge".

Let us know when you find formal proofs for human values or systems of morality. That's almost certainly Nobel Prize-worthy.


I mean, it seems a bit obvious to me that human values and morality are going to be reflections of fundamental human drives (i.e. to survive and procreate). The cultural, legal, and economic environment can change a lot of variables as to how we achieve these two goals, as can our perception of the environment in which we live, but at the end of the day people are still going to be acting in accordance with their evolutionarily programmed drives within the context of their environment. I don't see how we could do any differently.

Science isn't about formal proofs.

Science is about verifiable proofs (verifiable within reasonable time limits), which somewhat means the same thing as formal proofs in the grand scheme of things. Correct me if I am wrong.

Edit: forgot to put in time limits


"Science", in the broadest sense, is only concerned with the systematic study of the physical and natural world through observation and experimentation, with the goal of explanation and predictability. To that end, science is reliant of empirical evidence, not formal proofs.

The various disciplines that generally rely on "formal proofs" are referred to as "formal sciences" (logic, mathematics, statistics, etc.), but are technically not actual "sciences" since they are fundamentally abstract (as opposed to how we defined science above) - which is why they are generally concerned with formal proofs and not empirical evidence. Of course the formal sciences frequently provide the natural sciences and social sciences with ways to describe the physical/natural world and the social world, respectively.


You're wrong.

Science is about building and refining models (theories) in order to make them match observable reality as closely as possible, and then using those models to predict what happens in the future - both by itself, and in consequence of us poking stuff. That's what it means to know "how something works".

Engineering takes these models and adds a "what's the best way to poke things to achieve a desired outcome?" aspect.

Formal proofs are for mathematicians. Mathematics is a purely abstract invention and operates in its own universe, where absolute formal proofs are possible.


Mathematics is no different. Our brains are simply computers that verify that the system evaluates according to its rules. In fact, you can bootstrap an empirical verification of any mathematical proof if enough human brains evaluate it and determine that it holds.

You're getting a lot of flak for conflating verifiability with formal proofs.

In your defense, science appears to follow formality, though we often run into holes in our theory when a confounding number of variables are in play.

Me, personally, I don't know what that difference really is. Science is science because it works. If it didn't work, it wouldn't be science.

Mathematics can and will find ways to apply itself to the real world, no matter how approximately, as long as that difference matters to somebody.


Formal proof typically refers to a branch of mathematics / philosophy concerned with the symbolic manipulation of formal systems in order to prove theorems about representations of real-world objects.

For example, a system like Coq (https://coq.inria.fr/) is concerned with formal proofs, but is not really a foundation for most of science.
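For a concrete flavor, here's what a formal, machine-checked proof looks like in Lean, a proof assistant in the same family as Coq (my own minimal example, not from the comment above):

    -- The kernel mechanically verifies every step; `rfl` succeeds
    -- because both sides reduce to the same numeral by computation.
    theorem two_plus_two : 2 + 2 = 4 := rfl

That kind of absolute certainty is available only for statements inside a formal system - which is exactly what empirical science doesn't deal in.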


I don't see an alternative offered here. The argument seems to be "don't do things just because you think they'll produce the best result". Which...makes no sense?

If this advice is followed, we would have to knowingly choose to do something that we expect to be worse than another option we're considering. What do we hope to gain from that?


Public policy should only be based on objective truths, not on evidenceless beliefs or feelings. It's one thing to be for or against marijuana usage; it's another to still have a law on the books banning its use and sale when a growing body of science says it's far safer than substances already legalized and regulated (such as alcohol), and actually therapeutic in some cases (such as cancer). It's one thing to be personally against pornography being freely available on the Internet; it's another when scientific data (hypothetically) indicates that pre-adolescent exposure to pornography incurs long-term and tangible behavioral harms, BUT does not seem to harm adults or their marriages.

"Appeal to disgust" is a fallacy for a reason, and laws should be rational.


"Damming this river will hurt the surrounding ecosystem" and "Damming this river will provide power for thousands of people" are both objective truths. The only way to mediate between them is by discussing non-objective things like how we value fish and the services provided by electricity.

Do we value keeping fish alive more than keeping old people alive via power-hungry air conditioning?

There are millions of similar conundrums, none of which are solvable via appeal to objective truths.


Exactly. The way I think about it is that logic is a wonderful tool for ensuring your beliefs are consistent and helping you implement them effectively, but it's incapable of determining what those beliefs should be.

Put another way, you can only have a logical argument with someone who already agrees with your beliefs.


> Put another way, you can only have a logical argument with someone who already agrees with your beliefs.

Technically true, but that's an easy (and all too frequently used) way to dismiss other people's arguments and keep believing whatever you want.

It's very unlikely that any real-world policy question is a question about your fundamental values. You most likely don't value marijuana intrinsically. The future of a river dam is probably not the real center of your morality. Rational reasoning is useful for illuminating the connection between the subject of an argument and your moral values - a connection that is usually pretty complex, and that allows honest and patient enough people to reach the same conclusion given the same knowledge about the problem.


I agree with you that it shouldn't be used to dismiss arguments, but I do think most contentious policy issues boil down to fundamental values.

> Do we value keeping fish alive more than keeping old people alive via power-hungry air conditioning?

Consider the towns and cities by bodies of water that have either dried up or been polluted. It's financially ruinous, because it kills the tourism and/or fishing industries; the salty dust left behind by the water causes horrible health problems; and the natural brain/brawn drain leaves only the old and sickly behind, to either fend for themselves or live on a government paycheck. And that's just off the top of my head - I can easily think up a lot more effed-up scenarios that have already happened in the world (see the Aral Sea).


The grandparent is meant as an example of one of the many tradeoffs that are hard to decide via objective truths. The exact details are not significant to the larger point.

But along the way to the unfortunate end you describe above, many good things came of the dam. Any calculus needs to evaluate the tradeoffs. "Were they worth it?" doesn't lend itself to objective-factual analysis, because "worth" is all about values.


> one of the many tradeoffs that are hard to decide via objective truths.

Those tradeoffs are "hard to decide via objective truths" only because they haven't been decomposed enough. You have to dig deeper into them, split them apart, ask why we care about something, and then ask again and again. It might get hard to keep track of the expanding tradeoff space, which is probably why people shy away from digging into things too deeply. But if you do the work, you may find out that there is a way to solve the original problem in a way that satisfies everyone's low-level values.

Reminds me of the essays of Bret Victor, who occasionally seems to be making a related point: a lot of policy debates quickly devolve into appeals to emotion, because we lack tools to effortlessly decompose and analyze the actual problems. See e.g. http://worrydream.com/ClimateChange/#media for a good example.


> "worth" is all about values.

So are engineering problems.


> The only way to mediate between them is by discussing non-objective things like how we value fish and the services provided by electricity.

No, it's not the only way. It's the easy way.

The hard way is to consider other consequences of both choices, including long-term effects, and including changes that aren't 0%-100% but only slightly change the probability of other events.

Hurting the ecosystem might increase the probability of flooding the area, which would hurt some of the very people you are trying to comfort by giving them cheaper energy. So maybe it's objectively irrational to build the dam, no matter whether you value people or fish more?

Maybe the population in the area is decreasing anyway, and cheap power won't help? We can predict such things pretty reliably. So why build a huge dam and hurt the environment if in 40 years it will be useless?

It's impossible to know for sure, but it's possible to estimate. Abandoning rationality altogether just because it's not immediately obvious what to do is very irresponsible.

I know it was just an example, but IMHO it's a representative one. Many of the issues at the center of the public debate right now are objectively, scientifically solved, or so close to being solved that it doesn't matter what the exact details are.

Yet people continue to argue about them because of tribal thinking, religion, corporate interests, etc. See global warming, equality for sexual orientations, war on drugs, public vs private healthcare, etc.

The OC proposed that the human element is inherently unpredictable, so rationality isn't the solution. Yet, thanks to the law of large numbers, big groups of people are more predictable than small groups or individuals. So we may not know who will vote X or Y, but we can predict how many people will vote X and Y with reasonably small error. We have big data that can predict you're pregnant before you know you're pregnant, just based on your browser history, FFS.
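A minimal sketch of that point (mine, not from the thread; Python, with an arbitrary 52% preference for X): any single voter is a coin flip, but the aggregate share is not.

    import random

    def simulate_share(n, p=0.52):
        # Each of n voters independently picks X with probability p;
        # return the observed fraction who picked X.
        votes_for_x = sum(random.random() < p for _ in range(n))
        return votes_for_x / n

    for n in (10, 1_000, 100_000):
        shares = [simulate_share(n) for _ in range(200)]
        print(f"n={n:>7}: share of X ranged over ~{max(shares) - min(shares):.3f}")

Across 200 runs, the spread of the observed share shrinks roughly as 1/sqrt(n) - individuals stay unpredictable while the aggregate converges, which is the law of large numbers doing the work.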


> The only way to mediate between them is by discussing non-objective things like how we value fish and the services provided by electricity.

Seriously, a better way would be to start doing 5-whys on this, instead of going straight for "non-objective things". A quick attempt will tell you that we care about both the ecosystem and the power, but depending on the local context, we might care more about one or the other. Maybe we can import power from elsewhere, or put the dam in a place where it won't disturb the ecosystem so much. Maybe the ecosystem can handle the dam, or maybe the power really is desperately needed.

You can quantify all of this if you bother. Problem is, most people don't bother; they very quickly stop thinking about it and start going by their gut.

(I'd say, they stop behaving like adults, but since a lot of adults have this problem, I guess we need a different phrase.)


"we might care more about one or the other"

Caring more about one or the other is exactly the issue. The degree to which we should care about either at all is the issue that cannot be resolved objectively.

Putting a dam in a place where it won't impact the ecosystem so much - sure, we can minimize the tradeoffs, but the tradeoffs remain, and they all resolve to what we value vis-à-vis what it costs. Which is not objective.


Of course the reasoning is objective, once you dig down to the underlying "non-objective" values.

Think of it as akin to mathematics. Mathematics starts with axioms, which can be completely arbitrary, and then derives an entire universe from them by following only objective, formal reasoning. Similarly here, you have some personal values that are (to some extent) arbitrary, but accepting those, you get to use objective reasoning to devise a set of tradeoffs that satisfies those values the best.

Also sometimes different people with different values aren't really in conflict, they just don't have enough knowledge to realize their values aren't being violated. Fix that, and you can get agreement.


The axioms are precisely what is not objective.

Yes. But everything above them is.

> The only way to mediate between them is by discussing non-objective things like how we value fish and the services provided by electricity.

Why are those non-objective?


Which is more valuable, a fish or a person's comfort?

There is no objective answer to that question.


> There is no objective answer to that question.

Prove it.


> The only way to mediate between them is by discussing non-objective things

Wrong.

Both the fish and the electricity need to be considered in a long term utilitarian framework.

The only reason you seem to have two conflicting objective truths is that your time window is too short.


> Both the fish and the electricity need to be considered in a long term utilitarian framework.

Says who?

You have an infinite regress problem here. You are eventually going to have to fall back on non-objective feelings about how the world should be to justify your actions.


Right. We're going to fall back on "we should maximize utility because that's better than less utility". But we're not going to have to fall back on how it makes me feel to see dead fish or somesuch.

> "we should maximize utility because that's better than less utility

Depending on how you define "utility" - this is either a meaningless tautology that doesn't say anything at all, or a subjective belief that can be easily challenged by equally subjective alternatives.


Yeah, but the way from the real problem down to fundamental non-objective feelings is a pretty long one, and along the way you may discover that the differences in those feelings aren't that great, and that they can all be satisfied in many ways - which means you can devise a solution that's acceptable to every reasonable person. Sure, it may not always work, but it's worth trying.

Taking that argument to its logical conclusion, there should be no public policy whatsoever. Every single policy has at its roots an evidenceless belief. As an extreme example, "Seat belts save lives" is an objective truth, but "Saving lives is a good thing" is an evidenceless belief. That it is believed by the vast majority of the human race in no way constitutes evidence.

Evidenceless beliefs are what need to dictate the goals of policy, because there is no other way to decide on a goal. Once the goal has been decided, objective truths must be what determine the best way to reach that goal.


> As an extreme example, "Seat belts save lives" is an objective truth, but "Saving lives is a good thing" is an evidenceless belief. That it is believed by the vast majority of the human race in no way constitutes evidence.

I disagree: I am strongly opposed to using force by violence (laws) to prevent people from doing stupid things to themselves. In this sense, I do not necessarily consider saving lives a good thing (though of course there exist lots of cases where I do).

P.S.: I strongly recommend wearing seatbelts, but am opposed to seatbelt laws.


Let's try and give your picture a new frame. It may change your opinion on seatbelt laws! :)

From what I've been told and have read, the fines for not wearing a seatbelt are less about preventing people from making a potentially life-threatening decision and more about preventing people from making a really bloody mess that taxpayers then have to pay to clean up. A dead body strapped inside a car is a lot easier to clean up than a dead body thrown 60 feet down the road and torn into who knows how many pieces in the process. Not to mention that the equivalent of a human cannonball can cause further damage to property and to people who are unfortunate enough to be in the way.

Thinking of the fine as "increasing cost to society" instead of "preventing you from endangering yourself" makes it more agreeable to me - as I think we agree on the stance that laws that prevent people from doing stupid things to themselves aren't good laws. Laws that reduce wasteful costs to society, so that funding can be put into less wasteful things, are good laws, though - in my opinion at least.

Since most laws preventing people from doing stupid things to themselves save wasteful spending, I agree with them within that framing of the picture.


To simplify what you just wrote about "increasing cost to society":

You are forcing folks to internalize an externality. Same with carbon cap & trade schemes. Same with helmet laws. Same with toll roads. Etc...


The example there wasn't something I was trying to convince people of, but rather a way to show how unobservable beliefs are necessary to provide value judgments on observables. You have a different set of evidenceless beliefs, including "Laws are threats of force by violence, and should therefore have high bars to implementation." These evidenceless beliefs then influence which actions should be taken.

Why are you opposed to force by violence?

Good point, technically, but I think some subjective valuations are widespread enough to be considered objective, such as "alive/animate things are better than dead/inanimate things", "joy is superior to pain", and "murdering is bad". And regarding pain/pleasure, you can always do what doctors do and ask patients to assign the intensity a number from 1-10, which is basically a hack that takes a qualitative subjective valuation into the quantitative objective world, where it can then be reasoned about.

We already do this in a quite literal way with money: the price of things follows the subjective valuation of those things. So, for example, if a bit of paint on canvas in a frame, which would otherwise only be worth $40, is the Mona Lisa, then it is suddenly (and seemingly magically) worth $800 million. You could use this to evaluate joy and harm as well, in the form "what is the maximum amount of money you would spend to be able to eliminate this harm or to experience this joy?"


I disagree that joy is superior to pain. They’re both necessary parts of life and one without the other is indistinguishable from oblivion. Plenty of people would disagree about alive/animate versus dead/inanimate and in ways that would probably surprise you. I like mountains, I think they’re certainly more interesting and beautiful than pigeons. As for “murder is bad”, plenty of people disagree vehemently about the definition of murder. Is killing to defend someone else murder? What about killing someone during war time? Those aren’t even close to being settled issues and are absolutely matters of belief that are not provable in any scientific sense.

I think the distinction is a useful one to have, because it lets you identify the appropriate way to try to convince somebody. If people disagree about the effectiveness of policies, and which ones will bring about the desired effects, then additional evidence should be found and examined to determine who is correct. On the other hand, if people disagree about what the goal is, then no amount of evidence will convince them, because goals are unobservable.

> "Saving lives is a good thing" is an evidenceless belief.

Is it? Any kind of value system must presuppose the value of life because only life can hold something valuable. Value systems that do not are therefore inconsistent.


I don't see any inconsistency there. For example, I could imagine a value system that solely values the existence of paper clips. Actions that increase the number of paper clips in the universe are good, while actions that decrease the number of paper clips in the universe are bad. Life has value only in as much as it causes paper clips to exist.

It would be a rather silly value system for a human to have, but there wouldn't be any inconsistency in it.


If paper clips are the highest good, then ensuring that paper clips will be made in perpetuity is also of utmost importance. Only life is capable of knowing the goodness of paper clips, ergo we should ensure life's continuance so as to ensure the continued production of paper clips.

With the invention of machines, we could replace life with more efficient automated systems that produce paper clips, but stupid machines cannot adapt to changing circumstances, like outside threats in the form of asteroids, climate change, earthquakes, and so on. Therefore the machines should be as intelligent as possible, so as to adapt to any circumstances and ensure the continued production of paper clips.

Intelligent machines are simply another form of life. This obviously wouldn't happen overnight, but with the gradual introduction of more and more technology, the creators would more and more integrate with their machines.

Any value system that denies the value of life is simply inconsistent.


The parent is a nice statement of the position the author opposes. It has the benefit of being irrefutable, too, by not admitting the premises necessary to refute it.

Kings have ever considered their own power an end in itself, even if the small people have to suffer. They probably made arguments very much like this one to justify the wars fought over another king's acres. And some of the small people bought into it too.

I don't want to live in that world, though. I think this works as a case for the Brexit position only if you can demonstrate that the people will in fact be materially better off (not in money but in well-being, which can factor in feelings like the ones the author speaks about) after Brexit. That's a hard thing to demonstrate scientifically, since you can't run an experiment, but that's the argumentative stance you'd need to take to convince me.


I wrote something similar from the opposite end of the ideological spectrum a few years back: https://jasonlefkowitz.net/2014/01/against-line-chart-libera...

> One reason why I disagree with [technocracy] is a core assumption that is embedded deep within it, namely that public policy is at root a values-neutral project. In this worldview, there is an objective Good that we all strive towards, and our progress as a society can be measured by the velocity with which we approach that objective Good. We can determine this velocity by taking measurements - by gathering Data. These data will then tell us if we are on or off course, in much the same way that star sightings can do for a mariner lost at sea.

> This works for the mariner, because the stars are objective. It is not a matter of opinion where in the sky the North Star is. But "good," in terms of public policy, is most definitely not objective. My definition of what is Good is informed by my background, my experiences, my ideology; my values. And your definition of what is Good is informed by yours. Your North Star, in other words, is in a different place than mine is - which makes trying to navigate by taking sightings of it a perilous proposition.


I think we disagree that laws are supposed to be a mechanism for achieving "good."

'Cause that's where the truth comes from, ladies and gentlemen: the gut.

This word soup reads like it was generated by the Postmodernism Paper Generator [1]. What am I supposed to take away from this? Is there a TL;DR version?

[1]: https://en.wikipedia.org/wiki/Postmodernism_Generator


"The world is not an engineering problem." Pretty clear if you ask me.

What if we invented some gene therapy that could make everyone instantly and painlessly racially ambiguous? All looking like Simpsons characters or something. We could just poof and everyone would be the same! It would go so far towards eliminating racism and prejudice, which is unarguably a good goal. Some questions, though:

1. Would this really enrich human experience? Would it be "right"?
2. If someone didn't want the treatment, would it be right to force them to accept it, even though it might end racism?
3. Would this actually end prejudice?
4. Could this technology ever exist?

The author's answers would be: "Nope", "Definitely not", "Almost certainly no", and "If it couldn't, or we have no plans to make it, why even discuss it?" Now think about it like this:

1. Will Facebook having "better algorithms" end fake news?
2. Will Google and Apple adding fancy new apps end smartphone addiction?
3. Will cute little notes from Discord in their app end our huge lack of participation in democracy?

The author would say no to all of them, and he's right.


Hasn't this line of questioning been well-examined, though? It seems like the author is just re-inventing Greek mythology. Does anyone actually believe that the world is an engineering problem? It seems like the author is committing several rhetorical fallacies just to seem intellectual and bait people.

To engineers it is. How the hell are they going to make a living otherwise? Something being a problem of any type is just an opinion anyway... :D

Yeah, democracy isn't "rule by those who know". That's our problem nowadays, and why people by the millions go vote for Trump, Brexit, Bolsonaro, etc.: because of the smug know-it-alls.

I was hoping this would be more of a critique of the tendency of people to place too much faith in technology, to ignore the human element of solutions, and (among SV types especially) to prize their own goals above all else - to try to come up with purely technical solutions to the problems of society, but to do so by trying to create an environment which lets their own capitalist goals succeed while tossing scraps to the rest.

'Cause even when it's well-meaning, I still can't stand it. Their (ours, I suppose) conviction that they are right fuels a Randian commitment to individualism. But then they want to appear woke and smart, and so they start talking about basic income and how it's fine to have half the population just sort of subsisting. Or how we'll fix democracy with smartphones and blockchain.

Also the tendency towards not just being temporarily embarrassed millionaires, but temporarily embarrassed tech billionaires.



