
The Moral Economy of Tech - jmduke
http://idlewords.com/talks/sase_panel.htm
======
macrael
The fundamental mistake of thinking that technology alone can solve all human
problems (is this what technological liberationists believe? Great name.) runs
deep in this community and comes, I think, of worshipping intelligence.

The entire concept of an all-powerful AI that must be feared or at least obeyed
springs from this trap. But imagine we create this AI and ask it "how should I
live", and it replies "eat better and exercise". How will its intelligence
convince me to change my ways any better than my doctor does today? How will
more smarts give us peace in the Middle East? These human problems seem tied up
more with the all too human traits of will and desire and pride and...
Intelligence? It's far down the list.

Politics, for better and for worse, is our system for granting humans the
power people here seem to think AI will be able to take on its own. I think
part of Maciej's message is: stop waiting.

~~~
bcoates
You're thinking that the evil AI will be some sort of nagging scold, "eat your
vegetables or I'll tell you to eat your vegetables again". It's more likely
that it'll be like Gawker: "eat your vegetables or I'll send an angry mob of
idiots to burn your house down"

~~~
webmaven
I suspect an effective tactic would be 'eat your vegetables, or I will start
automatically ordering meatless and reduced calorie versions of everything'.

------
richardbatty
This article brings up an important source of bias that tech people risk -
that we overuse models from programming when thinking about other aspects of
the world. We should be learning alternative models from other subjects like
economics, philosophy, sociology, etc so that we can improve our mental
toolbox and avoid thinking everything works like a software system.

I'd say that another related source of bias is that we are surrounded by
people who think like us.

It's a shame though, that the article dismisses without much explicit
justification risk from artificial intelligence and the problem of death. When
I first encountered these ideas, I dismissed them because they seemed weird.
But if you read the arguments for caring you realise that they are actually
well thought-out. For AI risk, check out
[http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html](http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)
and
[https://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111](https://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111).
For an argument for tackling death, check out
[http://www.nickbostrom.com/fable/dragon.html](http://www.nickbostrom.com/fable/dragon.html).

Also, there's a clear answer to 'If after decades we can't improve quality of
life in places where the tech élite actually lives, why would we possibly make
life better anywhere else?' -- because the tech elite live in a rich society
where most of the fundamental problems (e.g. infectious disease control,
widespread dollar-a-day level poverty, access to education) have been solved.
The remaining problems are much harder and we should focus on problems where
our resources can go further - e.g. in helping the global poor. We should also
work on important problems that we have a lot of influence over, such as risks
from artificial intelligence and surveillance technology.

~~~
forgotuser
>we should focus on problems where our resources can go further - e.g. in
helping the global poor.

I'm not convinced that the right people to help the "global poor" are Silicon
Valley technologists. Scientifically minded entrepreneurs focused on profit
but armed with idealistic rhetoric, attempting to "help" cultures they know
nothing about, have historically not worked out great.

~~~
yummyfajitas
From over here in India it looks pretty good.

In spite of the radically different culture, Uber has improved transportation.
It turns out putting Indians into a car and charging them money works a lot
like doing the same for Americans.

Facebook and Whatsapp have improved communication.

At work, Slack, Salesforce and Jira work the same as any western office.

Why, exactly, do you think other cultures and the global poor can't use SV
technologies?

~~~
forgotuser
All those technologies were first developed in the US, for an American target
audience, then exported to other countries. Other things also developed in the
West and then exported: electricity, vaccines, et cetera. I agree that that's
generally been a good thing.

The parent comment, however, seemed to imply that technologists should focus
on "solving global poverty" instead of solving local problems. This represents
a fundamental shift, because at that point SV technologists are attempting to
solve problems they personally do not have experience with.

All the examples you provide solved problems that SV itself had:
transportation, communication, etc.

~~~
yummyfajitas
I didn't interpret the article as defining things like Uber as solving "local"
problems. But if you do, then the article is simply wrong because SV is
manifestly solving local problems.

------
Decade
My problem with these sorts of screeds is that they treat all techies as
complicit. It’s really a problem of big tech companies, and the big tech
companies have so much power because there is money in it, and they have the
money _because humans choose it._ We individually _chose_ to give our freedoms
to huge corporations with unfriendly designs for our lives.

It’s not like we couldn’t see it coming. Stallman, for example, has always
argued against use of Facebook and similar services.[0] It’s just that it’s
much easier to run centralized services, so they improve, they create better
user experiences, and they draw in even more cash, while decentralized Free
Software struggles to get enough attention to stay in business.

I, for one, didn’t get a Facebook until a critical mass of my friends got one,
and I didn’t get a LinkedIn until a government agency refused to give me
service without one. I tried to avoid the surveillance state, but it’s very
difficult to function in society without it. Especially if you don’t have some
sort of fan club to support you, like Stallman has.

[0][https://stallman.org/facebook.html](https://stallman.org/facebook.html)

~~~
laughinghan
If you believe a large fraction of techies are not complicit and in fact are
very against these unfriendly designs, then do you agree with the solution
proposed by the author—just like the solution to the problem of unsafe working
conditions—namely, regulation? After all, if a large fraction of techies are
against this, there's political will, right?

Sure, in your worldview the big tech companies have too much money and power,
but why would it be so much more asymmetrical than the unsafe factory days as
to require a qualitatively different solution?

> I tried to avoid the surveillance state, but it’s very difficult to function
> in society without it.

The author made this point, too, and connected it with the point about unsafe
working conditions and how it is theoretically opt-in but in practice a
highly-discouraged opt-out, which I thought was poignant, especially because
it suggests an easy-to-understand (though difficult-to-implement) solution:
regulation.

Note that he's not just handwaving in the magic word "regulation", he has a
concrete proposal for six specific, privacy-protecting rules:
[http://idlewords.com/talks/what_happens_next_will_amaze_you....](http://idlewords.com/talks/what_happens_next_will_amaze_you.htm#six_fixes)

~~~
Decade
> Note that he's not just handwaving in the magic word "regulation", he has a
> concrete proposal for six specific, privacy-protecting rules:
> [http://idlewords.com/talks/what_happens_next_will_amaze_you....](http://idlewords.com/talks/what_happens_next_will_amaze_you.htm)

Yeah, I’m not so sure that regulations will work all that well. They tend to
entrench incumbents and squash innovation.

He points to Europe for regulations not working correctly, and I think it’s
amazing that Google has greater market share over there than anywhere else.
You would hope that European pride and resourcefulness would promote homegrown
search engines. Nope, the major alternatives outside the US are protected by
totalitarian regimes, and Europe tries to regulate Google, instead.

For example, right to download and right to delete would only work if you are
definitively identified. Current tracking is probabilistic, supposedly
anonymized. In an ideal world, right to delete would mean no more tracking
that you haven't opted into. In the real world, I expect that right to delete
would mean you need to be logged in and tracked even more closely. That's
certainly appealing to the "anti-terror" interests, too.

A big difference between computing and other tech is just the amount of
leverage that is possible. Mark Zuckerberg was making a web site to connect
classmates, then it was connecting all the universities, and in no time he's
the #6 richest person according to Forbes. When he started, I don't think this
is how he thought he would end up. He has simply been guided by his morality
of creepy openness. The amount of change that we're capable of creating is
beyond our comprehension, and I think it's unfair to judge our industry by the
disaster stories that make it to public consciousness.

------
robohamburger
I think the best thing that ever happened to me was realizing the thesis of
this article and getting some humility.

I used to be firmly in the software will eat the world camp. Turns out the
world is pretty cool. If you still think like this: travel, study people and
get outside.

Getting to know the world around you and people who are different than you
will make you a better person and a better engineer.

~~~
Nav_Panel
> Turns out the world is pretty cool. If you still think like this: travel,
> study people and get outside.

I agree with the last two points, but why travel? Wasn't one of the points in
the article that we're so busy trying to think broadly -- by going to faraway
places and dreaming of general case solutions -- that we reject the small-
scale case of the people already around you? Unless you mean travel to
Bayview...

From the article:

> I am very suspicious of attempts to change the world that can't first work
> on a local scale. If after decades we can't improve quality of life in
> places where the tech élite actually lives, why would we possibly make life
> better anywhere else? ... We should be skeptical of promises to
> revolutionize transportation from people who can't fix BART, or have never
> taken BART.

~~~
laughinghan
The point in the article about thinking too broadly is a subpoint of the
larger point about the hubris of believing we can solve faraway problems that
we've never seen up close. Both focusing on the local problems that we can see
up close, and getting out there and seeing those faraway problems up close,
could arguably be antidotes to that hubris.

------
dcole2929
I think what he describes here is a fundamental flaw of human nature. We now
have overwhelming evidence that humans are really good at taking incomplete,
disparate pieces of information and coalescing them into a whole to reach a
conclusion. We generally infer very well. The problem is that the conclusions
we reach are also generally wrong. Given a very specific type of domain with
deterministic answers (e.g. most engineering fields), we are able to reach
correct answers more often than not. And the rate at which we are correct has
only increased (nearing 100%). However, when it comes to the messy domains in
which we live, we are still pretty poor. Our instincts are generally good
enough to keep us alive (don't walk down this dark alley, that guy looks
shady, etc.) but much more than that and we suck.

I for one am deeply uncomfortable with people training AI systems to make the
kind of decisions we as humans do. These systems may have more information to
go on, but if they're using the same methods that we do, how accurate can they
be? Thanks to the rapid advancement of the last 150 years, we have quickly
reached a place where society's technological knowledge and ability far
exceeds our moral ability to reason about it and its implications. We are like
an untrained child with a gun. Yeah, we may understand the mechanics behind
pulling the trigger and how it works, but we don't yet have the maturity to
understand when we should, and what truly happens when we do.

~~~
fredfoobar42
There's this attitude that because computers work on pure logic, anything a
computer does must therefore be logical and rational. The problem is that the
humans who program the computers will often encode their own illogical,
irrational, biased, and flawed logic. The computer is not logical and
rational, it just follows instructions well. If the instructions are
illogical, irrational, biased, and flawed---but executable---the computer will
follow them. And when they spit out an illogical, irrational, biased, and
flawed result... well, it must be logical, rational, unbiased, and unflawed,
because the computer is purely logical, rational, unbiased, and unflawed.

We code our biases into the algorithms we use every day. They inherit our
flaws. The sooner we all wake up to this, the better.
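A toy sketch of the point (the scoring rule, names, and numbers here are all hypothetical, invented purely for illustration, not taken from any real system):

```python
# A hypothetical loan-scoring rule: every line is deterministic and
# "logical", yet the rules quietly encode the author's assumptions.

def score_applicant(income: float, zip_code: str) -> float:
    """Return an approval score. Pure, reproducible logic."""
    score = income / 10_000
    # The programmer "knows" these areas are risky -- a human prejudice,
    # now laundered through arithmetic and reproduced on every run.
    risky_zips = {"60624", "48204"}
    if zip_code in risky_zips:
        score -= 5
    return score

# Identical incomes, different neighborhoods: the bias shows up in the
# output, but the computation itself never stops being "purely logical".
print(score_applicant(50_000, "60624"))  # 0.0
print(score_applicant(50_000, "94105"))  # 5.0
```

The machine faithfully executes the flaw; nothing in the code's logic flags the penalty as prejudice rather than fact.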

~~~
gshubert17
> We code our biases into the algorithms we use every day. They inherit our
> flaws. The sooner we all wake up to this, the better.

And our values. And assumptions. In Arthur C. Clarke's "2001: A Space Odyssey"
the homicidal behavior of HAL was ultimately based on the conflict between his
secret instructions and what he was able to share with the human crew, Bowman
and Poole.

~~~
dcole2929
This is actually a serious concern when it comes to AI. People tend to think
it's overblown, but non-deterministic behavior is bad because you don't know
what the program will do. Maybe it crashes, maybe nothing happens, maybe it
nukes your CPU. You simply don't know. The first rule of true AI has to be to
ensure it values human life. It would be grossly irresponsible to create an
entity more intelligent than us and do otherwise. But what happens when the
military inevitably decides these new AI things would make really good drone
pilots? Or when a sufficiently powerful AI comes to the conclusion that the
best way to protect human life is to take human life? What's the end result of
that conflict? What's stopping a sufficiently intelligent AI from rewriting
its own code to get around restrictions it doesn't like? We already have
self-modifying code. It's a scary thought, and the fact that people are basing
these things off the way humans think makes it even scarier.

------
yummyfajitas
While I can certainly detect a bunch of content-free mood affiliation designed
to appeal to those who dislike technological liberationists, I can't tell what
the point of this article is.

First: _In the real world, this has led to a pathology where the tech sector
maximizes its own comfort. You don't have to go far to see this. Hop on BART
after the conference and take a look at Oakland, or take a stroll through
downtown San Francisco..._

Second: _Techies will complain that trivial problems of life in the Bay Area
are hard because they involve politics. But they should involve politics._

So first he's criticizing techies for not fixing the pathologies of Oakland.
Then he's saying these problems are rightly the domain of politics. So what's
the problem? Those are political problems, should be left to politicians, and
techies are doing exactly that.

Finally: _In a world where everyone uses computers and software, we need to
exercise democratic control over that software._

Aha - so first techies are not solving problems that the author thinks they
should be leaving to politicians. From this he draws the conclusion that
techies - currently doing a great job of solving the problems within their
domain - should bow down and submit to control by the politicians who failed
to solve the local problems he cares about.

Another choice line: _If after decades we can't improve quality of life in
places where the tech élite actually lives, why would we possibly make life
better anywhere else?_

Why would we do something that benefits far away humans over something that
benefits nearby humans? Don't help those foreigners (or people in NY)! Build a
wall! Trump 2016! (Same idea, just directed at a slightly different target.)

~~~
snowwrestler
Politics is far too important to be left to the politicians!

Politics is how we resolve disagreements, which is hard to do. Maciej argues
(rightly, IMO) that avoiding politics for technology is basically retreating
from a set of hard problems into a set of easier problems.

> From this he draws the conclusion that techies - currently doing a great job
> of solving the problems within their domain - should bow down and submit to
> control by the politicians who failed to solve the local problems he cares
> about.

He's saying that techies should directly engage in politics to achieve
positive outcomes locally, before believing that they can use technology alone
to achieve positive outcomes globally.

He's saying that what techies consider to be "problems within their domain" is
worth some introspection.

~~~
spaced_out
>He's saying that what techies consider to be "problems within their domain"
is worth some introspection.

But couldn't that lead to exactly the sort of arrogance that Maciej argues
against? That engineers think their ability to solve tech-related problems
means they are specially equipped to solve more complex problems in society?

For example, if some sociology major who had no programming experience tried
to advise me on a challenge I have with some software I'm working on, based on
something they read in Wired, I'm probably just going to roll my eyes.
Frankly, it's pretty unlikely that someone with no background in the field
I've dedicated my professional life to can provide useful advice.

Likewise, do you think an engineer can provide useful advice on how to deal
with poverty to a sociologist, one who has spent their career studying this
issue? Do you really think the secret to improving society is to have a bunch
of techies telling sociologists and economists how they think things should be
run?

Maybe the right thing for techies is to pay their taxes, and participate in
local elections as informed voters. Other than that, maybe we should stick to
tech, and leave running society to the professionals.

------
chflags
"We started out collecting this information by accident, as part of our
project to automate everything, but soon realized that it had economic value."

Is he saying the end results of the project to [insert stated project purpose
here], e.g., "to automate everything", "to organize the world's information",
etc., did not have enough economic value to sustain the project... and hence
founders were "forced" to collect data _as a means to generate value_?

Here's another version: pre-Google search engines realized they could sell ad
space, i.e., paid placements, e.g., to auto manufacturers.

Once the advertising industry became involved, then collecting data about the
network's users, if one could do it, was a no-brainer.

"... really it's just the regular old world from before, with a bunch of
microphones and keyboards and flat screens sticking out of it."

Not sure that young people who worship Silicon Valley want to believe this,
and why should they?

The author always makes a good case for the potential long term consequences
of the so-called "changes to the world" that many programmers are adamantly
pursuing.

Maybe these programmers are not changing the world. Maybe they're just doing
what others already did in the past, on a smaller scale, without the benefit
of cheap electronics.

~~~
forgottenpass
_Is he saying the end results of the project [...] did not have enough
economic value to sustain the project... and hence founders were "forced" to
collect data as a means to generate value?_

I don't see any reason to read that into his speech. We're used to the "you
pay or are the product" dichotomy being propagandized at us as the reason for
accepting surveillance business models. I don't see that here. I see it more
like describing the realization that there is an(other) revenue stream staring
the creator in the face.

------
ArkyBeagle
This reminds me of the Adam Curtis film "All Watched Over by Machines of
Loving Grace." Indeed, "eat your vegetables" itself is as much an
environmental idea as a nutritional position. Curtis attributes
environmentalism in its present form to systems-think grounded in the
computer industry.

The deep irony is that computers themselves arose from the ashes of Bertrand
Russell's (failed) attempt to systematically automate mathematics.

I'm always amazed that techies, who have to deal with the fragility and
foibles of computers, aren't leading the charge to be skeptical of this sort
of thing. Of course, the incentives are not aligned with that.

------
dtornabene
What a shame no one is actually responding to the substantive points here.
This is without a doubt one of the better "programmer describes politics from
a programmer's point of view" pieces to turn up here.

------
TeMPOraL
I think Maciej's mind is getting seriously poisoned by hatred towards SV. I
understand the anger - hell, I abhor the ad-driven tech too; I stopped
believing in startup marketing a long time ago. But just because companies
say bullshit to make money doesn't mean the tech doesn't work, or that
technologists are _worse_ than normal people because they use their brains to
solve complex problems. This talk and previous ones are starting to sound
less like pointing out existing problems and more like anti-intellectualism.

In Maciej's recent talks I feel a sense of defeat that he's trying to mask
with calls to action and thinly-veiled insults towards programmers. Like the
constant references to politics - if there's one thing true in this world,
it's that politics is one of the least cost-effective ways of solving
_anything_ one can imagine. If people want to do tech, or science, instead of
politics, maybe it's not because they're stupid, but because they want to do
something _effective_? After all, it wasn't politicians who fed the world, it
was Haber & Bosch. It wasn't environmentalists who saved whales from
extinction, it was whoever invented that synthetic whale-oil substitute.

While I accept I might be a bit technophilic, I think some of Maciej's recent
comments border on ridiculousness. Criticizing not just Google per se, but the
very effort to cure death? It's not hubris. _It's something we should have
been focused on for at least the past few hundred years._ If it takes rich SV
guys to finally do something about it, then instead of calling them full of
hubris, one should ask why it was they who finally started tackling this
problem in a way that has at least a chance of being effective.

Another point - Maciej said that "really it's just the regular old world from
before, (...) [and] it has the same old problems." Well, people don't change
(fast enough), so their problems don't change. But the world looks _very_
different than it did a few hundred, or a few thousand, years ago. So what
changed, if not people?

I say, technology. Personally, I'm getting more and more convinced that social
changes are _caused by_ changes in technology landscape, not the other way
around. I.e. we didn't have liberal democracy 500 years ago not because we
were stupid then, but because the technologies that can support it - like the
printing press, effective firearms, etc. - did not exist or had not yet
reached the point of ubiquity.

So yeah, daily bashing of advertisers and anti-social startups is fun. But
let's not pour the baby, the soup and the whole medicine cabinet out with the
bathwater.

~~~
laughinghan

        ... curing death? It's not hubris. It's something we should have been focused
        on doing for at least past few hundred years. If it takes rich SV guys to
        finally do something about it, I think instead calling them full of hubris one
        should instead ask, why it was them who finally started tackling this problem ...
    

I totally appreciate what you're saying. You're saying "the rich SV guys are
right, why haven't we been trying to cure death? Obviously curing death would
be amazing! Death is awful!".

But the fact that people whose expertise is primarily in software,
entrepreneurship, and/or investing, who all have, I think we can safely say,
_limited_ experience as biology researchers, the fact that these people are
making predictions about when death will be cured and judgements as to which
approaches are most promising, predictions and judgements that are not
deferred to _actual biologists who study senescence_...there are of course
upsides to such hubris (e.g. being unaffected by perverse incentives in the
academic system in which biologists traditionally work), but surely you can
see that there is hubris here?

In spite of all the caveats like how academia sucks, at the end of the day,
the _actual biologists who study senescence_ are still understood to be the
experts on the feasibility of immortality. Making any kind of prediction or
judgement call that isn't deferring to the acknowledged experts is
incontrovertibly arrogant.

To be clear, we wouldn't be leveling this criticism if "rich SV guys" were
simply increasing financial support for aging research. There are undoubtedly
controversies around actors raising money for medical research (e.g. because
it arguably results in disproportionate funding for the diseases of old rich
white people), but nobody accuses those celebrities of hubris, either.

This criticism is because there are people in Silicon Valley who earnestly,
genuinely believe that because they're hackers not biologists, they're gonna
cure death by hacking biology. Even if you believe they're right, how can you
deny the arrogance?

    
    
        ... finally started tackling this problem in a way that has at least a chance
        of being effective.
    

And this part is why this hubris is harmful: the dismissal of the efforts of
anyone other than the "rich SV guys" of even having "at least a chance of
being effective".

    
    
        politics is one of the least cost-effective way of solving *anything* one
        can imagine
    

Which way of solving institutionalized slavery, or women's suffrage, or
totalitarianism do you think would've been more "cost-effective" than
politics?

At least one of those was _directly caused_ by technology, and solved by
politics and its siblings, economics and war.

~~~
laughinghan
_I moved this out of the parent comment because I really want you to read the
parent comment and was worried about the wall-of-text; hopefully, you'll be
interested enough to read the continuation, too._

    
    
        it's not politicians who fed the world, it was Haber & Bosch
    

Without diminishing their accomplishments, it's important to note that Haber &
Bosch have come and gone but world hunger _still isn't solved_. And it will
never be solved by technology alone. It's going to require politics, and
economics, and unfortunately probably war. Hopefully technology can play an
important supporting role, but the realization that technology will only be
playing a supporting role - that's exactly the humility I'm arguing for.

Also, in what way were Haber & Bosch representative of "rich SV guys" or
techie hubris? Maciej has never seemed to me to be criticizing mainstream
scientific research at all.

    
    
        It wasn't environmentalists who saved whales from extinction,
        it was whoever invented that synthetic whale-oil substitute.
        ...
        Personally, I'm getting more and more convinced that social
        changes are caused by changes in technology landscape, not the
        other way around.
    

Isn't synthetic whale-oil a clear counterexample to this causative claim?
Surely you agree that research into a whale-oil substitute was driven at least
in part by environmentalism; it wasn't the invention of synthetic whale-oil
substitute that incited people into caring about the environment.

And again, in what way is the invention of whale-oil substitute representative
of the hubris that Maciej is criticizing?

    
    
        we didn't have liberal democracy 500 years ago not because we
        were stupid then, but because technologies that can support it
    

Didn't the Greeks have democracy 2500 years ago? The Roman Republic also had a
form of democracy, one that lasted longer than the US is old; our technology
may well turn out not to "support it" any better than theirs did.

This reminds me of a quote from a talk I read recently:

    
    
        More broadly, we have to stop treating computer technology as something
        unprecedented in human history.
        ... let's at least learn from our past, so we can fail in interesting
        new ways, instead of failing in the same exasperating ways as last time.

------
pron
It is ironic that the people most qualified to grasp the problem of essential,
irreducible complexity are software developers. That many don't shows that
something very fundamental in the underpinning of computer science has been
lost:
[https://news.ycombinator.com/item?id=11932963](https://news.ycombinator.com/item?id=11932963)

------
marcosdumay
> There is also prior art in attempts at achieving immortality, limitless
> wealth, and Galactic domination.

Hum... Except for very primitive ones, based on religion, I actually cannot
think of any such previous attempt. Does anybody know what the author is
talking about?

~~~
maxerickson
I think the tone is a little sarcastic; if you attenuate the claims a little
bit, it would include projects like the Third Reich, the Soviet Union, etc.

~~~
marcosdumay
Ok, maybe, but those have no relation at all to creating technology that will
improve people's lives. If that is what the author intends as a joke, I should
spend less effort understanding his other points.

Before you pointed that out, I was expecting them to somehow relate to the
Industrial Revolution and the exploitation of the New World. Those are much
more appealing metaphors, but I couldn't find anything that we'd be refusing
to learn from them.

~~~
maxerickson
They were both about improving life through technology! It just happens they
missed other important perspectives. That's part of what the talk is about:
people with newfound powers not sufficiently considering the consequences of
their actions.

I would also say that the World Wars do in fact relate to the Industrial
Revolution, as part of the reason they were so horrible was the newfound
power provided by scientific industry.

------
stcredzero
_The reality is, opting out of surveillance capitalism means opting out of
much of modern life._

There is now a surveillance-based online politics built on top of
"surveillance capitalism."

------
guard-of-terra
Hidden in the article is an example of how moralizing destroys the soul:

> Abortion has been illegal in Poland for some time, but the governing party
> wants to tighten restrictions on abortion by investigating every miscarriage
> as a potential crime.

One day, you are "pro-life", and the next day, you are happy to harass people
in terrible life situations.

~~~
danharaj
The way I interpret these contradictions is that words in ideology are largely
instrumental. They achieve a goal that is seldom stated explicitly. That's how
you get aporias like pro-life bombings of abortion clinics and people
promulgating family values who refuse to recognize certain families.

~~~
guard-of-terra
My point is that they pay with their soul (or you can call it "integrity"), up
front, and then maybe achieve a questionable goal.

~~~
droopyEyelids
Possibly.

An alternative explanation is that the stated goal (reducing abortion) wasn't
the actual goal.

For example, if controlling/subjugating women was the actual goal, then more
of the actions taken in the context of 'pro-life' activism make sense.

~~~
guard-of-terra
"if controlling/subjugating women was the actual goal"

Then to begin with they had no soul.

~~~
webmaven
Please continue, you're on a roll.

~~~
gaius
Are you sure that he's not a troll?

~~~
webmaven
By no means. But then, I _was_ just out for a stroll.

------
unabridged
>How will more smarts give us peace in the Middle East?

Less religion (or at least fewer true believers). In addition to intelligence,
technology can also increase the comfort of everyday life. Comfort + rational
thinking = less likely to risk your life killing others.

~~~
fredfoobar42
You'd think that, but there's been a huge swath of engineering school
graduates joining ISIS and similar organizations. It's not that ISIS is
recruiting them; something about engineering students [makes them more likely
to join ISIS](https://www.washingtonpost.com/news/monkey-cage/wp/2015/11/17/this-is-the-group-thats-surprisingly-prone-to-violent-extremism/).

~~~
rm_-rf_slash
The engineering school of the university I work at is plastered with posters
urging students: it's not too late, just make the call, don't make the jump.
If it weren't so tragic, it would be funny that it's the only college that
seems to have that problem.

So I guess ISIS appeals to the ones who would rather go out with a bang.

~~~
mjevans
I was thinking about this the other night:

In the world of an RPG, why does my character go off and become an Adventurer
instead of living a quiet, humble, out-of-the-way life?

In that fantasy world the normal quality of life isn't that good, and if
you're just sitting around you're probably going to be picked off by some
a-hole before natural old age.

In that fantasy world, if you go out and TRY to have an adventure, your hard
work, blood, and tears usually result in a better outcome. You did good, you
get good rewards.

I think the real reason our society feels so vulnerable to 'radicalization' is
that there is a distinct breakdown in 'the dream'; in the belief that you can
put forth work and actually be rewarded. Or even in the ability to take that
risk without starving/growing ill and ending up in a poverty spiral.

The people demand fulfillment, and meaningful lives; they want dignity and
respect for their contribution to the whole.

------
thenobsta
tldr: learn from history

Apropos as YC looks into giving everyone money and designing cities from first
principles.

~~~
tim333
Designing cities has a long history, going back about 4000 years to Egypt. I
imagine the YC folks will be aware.

Basic income less so, though there are precedents in those with inherited
incomes and the landed gentry.

------
carapace
I just want to point out that "moral economy" is an oxymoron.

~~~
ArkyBeagle
Adam Smith at least wrote "The Wealth of Nations" after writing "The Theory of
Moral Sentiments", to drill down into the morality of money. His point is that
capitalism is a moral improvement on mercantilism.

~~~
carapace
Yeah I think I failed to communicate my point.

In the Bhagavad Gita (chapter 2 verse 49) there's a statement to the effect
that those who work for the fruits of their effort are misers.

