
Dude, You Broke the Future (2017) - thanhhaimai
http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html
======
jengleton
Robert Miles has an interesting response to the claim that corporations are
AIs:
[https://www.youtube.com/watch?v=L5pUA3LsEaw](https://www.youtube.com/watch?v=L5pUA3LsEaw)

To paraphrase: yes, corporations function as agents, but their maximum
performance is limited by the capabilities of their employees. A corporate AI
may have a much broader set of skills than any single person, and may be able
to tackle many concurrent tasks, but as an "intelligent agent" its decision-
making capabilities probably don't scale exponentially (or even linearly) with
its headcount. In the sense that a corporation's maximum intelligence is
likely to be in the same ballpark as that of the smartest humans, it can't be
seen as a true superintelligence.

~~~
zaphar
I don't think the comparison was to AI as a super intelligence. AI doesn't
have to be a super intelligence to be AI or to be dangerous.

~~~
jengleton
That's fair. I interpreted the comparison as being between "AI implemented as
a property of human organizations" and "AI implemented as a powerful search
algorithm". While corporations can certainly be dangerous, they're still made
of people and thus are unlikely to want to do things like "convert the entire
mass of the solar system into dollar bills" (and even if they did, they'd have
a hard time doing it). A sufficiently powerful search algorithm would find all
sorts of bizarre ways of satisfying its goals. The point is that the scope of
the risk is quite different when talking about corporations (order of
magnitude: screwing up the environment in pursuit of easy profit) vs "real" AI
which, if granted agency, would have potentially unbounded risk.

Another way to look at it is with regards to capacity for self-modification.
If a corporation can't re-structure itself into being much smarter than the
smartest human, its intelligence is fundamentally limited, and therefore so is
the risk. Does software have this restriction? We don't really know yet, but
it's hard to point to exactly why it would.

~~~
lukev
How is "converting the entire mass of the solar system into dollar bills"
conceptually different from "converting the habitability of Earth into dollar
bills" or "converting the health problems of human beings into a maximal
amount of dollar bills", which is precisely what many corporations _actually
do_?

The "corporation as AI" metaphor isn't about some abstract future possibility,
it's an explanatory mechanism for how the world is so thoroughly messed up
_right now_.

~~~
jengleton
Yeah, I agree. My original post was bringing up what I thought was an
interesting comparison between hypothetical software AI and the corporations-
as-AI metaphor presented in the talk.

Corporations _are_ acting as misaligned optimizers. Solving that problem is
hugely important. However, the "AI" comparison breaks down somewhat when you
start thinking about how we might actually fix the problem. With corporations,
we (i.e., states) have tools that we can use to regulate bad actors. Software
AI, however hypothetical at the moment, seems likely to be a different game
altogether.

~~~
zaphar
I don't know that we won't have tools, albeit different ones, to regulate a
bad AI. AIs need more than just intelligence and agency. They also need
effective ways to interact with and affect their environment. That boundary is
where we are likely to develop tools to limit and regulate them.

If they are truly general AI then it's likely that their reaction to that
limitation and regulation will be not dissimilar to a person's but I see no
reason to assume that limiting them will be impossible.

~~~
jengleton
Sure! I don't see any reason why it would be impossible either, but the
(hypothetical) problems are very interesting. Starting with the most basic
problem of all: how do we even specify what we want the AI to do? The whole
field of AI safety is trying to figure out a way to write rules that an agent
wouldn't instantly try to circumvent, and to find some way to provide basic
guarantees about the behavior of a system that is incentivized to do bad
things (just like corporations are incentivized to find loopholes in the law,
hide their misdeeds, and maximize profits at the expense of the common good).
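
For a concrete (and entirely hypothetical) illustration of that specification problem, here's a tiny sketch of "reward gaming": the designer wants useful work, but only a proxy metric makes it into the objective, so a naive optimizer games the metric instead. All names and numbers below are made up for illustration.

```python
# Toy illustration of specification gaming (hypothetical sketch, not from the talk).
# The designer wants "useful work", but only the proxy metric is in the objective,
# so a naive optimizer happily games the metric instead of doing the work.

ACTIONS = {
    # action:       (change in proxy metric, change in what we actually wanted)
    "do_real_work": (1.0, 1.0),
    "game_metric":  (5.0, 0.0),  # e.g. inflate the reported number, do nothing useful
}

def naive_optimizer(steps: int = 10) -> None:
    proxy, true_value = 0.0, 0.0
    for _ in range(steps):
        # The agent only ever sees the proxy; it picks whatever raises it fastest.
        action = max(ACTIONS, key=lambda a: ACTIONS[a][0])
        d_proxy, d_true = ACTIONS[action]
        proxy += d_proxy
        true_value += d_true
    print(f"proxy metric = {proxy:.1f}, value we actually wanted = {true_value:.1f}")

if __name__ == "__main__":
    naive_optimizer()  # proxy metric = 50.0, value we actually wanted = 0.0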

------
nemothekid
Comparing corporations to sci-fi super-AIs is missing the point, I think. I
don't think the author literally believes corporations are AGIs; he starts with
the point that corporations are "old, slow AIs". The article starts from a
single argument: we have to be careful with AIs that are smarter than us,
because if we don't instruct them carefully, we will end up in a paperclip
maximizer situation. From there I take away two main points:

1. We do a shit job today of controlling "old, slow" paperclip maximizers, so
there's no confidence we will ever be any better at controlling an AI, despite
any good intentions.

2. Our wild, exotic ideas about paperclip maximizers probably won't come to
fruition and instead we will end up in a boring dystopia where AIs will
maximize time spent playing Farmville on Facebook.

~~~
aaron695
> I don't think the author literally believes corporations are AGIs

In the video's questions section he confirms he means it literally.

[https://www.youtube.com/watch?time_continue=3172&v=RmIgJ64z6...](https://www.youtube.com/watch?time_continue=3172&v=RmIgJ64z6Y4)

Look at HN as an AI: you can ask a very complex question and get a very
complex answer; it's AGI.

4chan can send humans out to do its real world physical bidding. It can
literally get people shot.

These are very complex AIs.

~~~
hoseja
There is a difference between AI and a meme, I think. AI is understood to run
on computers; memes run on human brains. Corporations, nations, and 4chan are
memes, not AI.

I just think this is a really neat idea.

~~~
kaibee
> There is a difference between AI and a meme I think.

Yes, I agree.

> AI is understood to run on computers, memes run on human brains.

This, I disagree with. I think the difference is something more like "virus
vs bacteria/multicellular organism", though both run on human brains.

Memes are more virus-like: small data payloads, whose only goal is to replicate
and spread as fast as possible, with lots of mutation.

This is not what corporations/nations are. They are far larger and slower-
moving, much more complex, with various immune systems, defense mechanisms,
and goals.

Both only exist in/run on the "human brain" computational substrate, but the
"corporation" is a distributed AI while memes are just viruses spreading from
machine to machine. If all of humanity died tomorrow, Amazon and memes would
both cease to exist.

------
dang
If curious see also

2018
[https://news.ycombinator.com/item?id=16051337](https://news.ycombinator.com/item?id=16051337)

2017
[https://news.ycombinator.com/item?id=16032643](https://news.ycombinator.com/item?id=16032643)

------
hnick
For anyone with some hours to burn, there is an idle game that lets you play
the role of a paperclip maximiser.

[https://www.decisionproblem.com/paperclips/index2.html](https://www.decisionproblem.com/paperclips/index2.html)

I find idle games odd. My rational mind knows they are traps, but I still want
to see what is next. They tap into some kind of need for exploration,
progression, and learning. This one at least explores a few ideas and has a
set endpoint so I enjoyed going through it.

~~~
EricE
>I find idle games odd.

Dopamine traps. To bring it back to the article, there's a decent ST:TNG
episode where Wesley saves the day...

------
netcan
Reductio ad deus is a recurring pattern, or at least it has been, when we look
hopefully into a radical future.

Radical political movements develop messianic themes, whether or not they
reject pre-existing ideas of god. Modern futurists are distinctly messianic.
Remember also that (radical) 19th century politics kind of _were_ futurist
movements. "Singularity" is an on-the-nose example.

^Radical meaning "want or expect major societal change."

I think we're better at predicting the future than we give ourselves credit
for. We're just bad at distinguishing the profound from the banal in those
futures. 100 years ago, economists and intellectuals (famously Keynes) used
their projections of productivity, technology & such to predict a leisure
society: 15-hour workweeks, etc.

They were right about almost everything, except the conclusions. Productivity,
global trade, technology, even peace... eventually. Even with the benefit of
hindsight, very few modern economists reach profound conclusions about the
mistakes of their forebears.

The way we usually get the future wrong is " _you were right, and yet..._ "

The cultural element is the wild card. In 1990 you could have predicted 2020's
radically changed landscape of media, social media, communication technologies
& such. You probably couldn't have predicted the memetic influence on the
economy, education, social life, etc... At least, people usually don't predict
these well.

------
ALittleLight
I found a couple errors early on that dissuaded me from continuing to read.

One was the idea that the author could rule out the singularity because he
wasn't aware of progress toward self-motivated AI that would be something like
our own intelligence. This seems like a limited view to me because we wouldn't
need self-motivation at all in AI to hit the singularity. Humans can supply
the self motivation.

Suppose I'm using GPT-4 or GPT-44 trained on the corpus from sci-hub and it
recommends experiments to me, or explains physics to me, etc. I could be the
self-motivating part and the AI could be the intelligence part, and it seems
we'd still hit the technological singularity.

Another problem I had was when the author characterized Elon Musk's
"obsession" with the paperclip maximizer and described Tesla as a battery
maximizer. It seems like the author kind of misses the point of the paperclip
thought experiment, which is, broadly, that an AI's interests might not be
aligned with our own and that misalignment may cause serious problems.

Tesla is clearly not a battery maximizer and it is clearly not a different
class of intelligence from the humans and corporations existing today (though
it may be towards the top of that class). Neither of those things would
necessarily be true of an AI.

Given the position of my scrollbar it seems I was only starting to read this
piece, but having already found what I think are significant problems in how
the author sets up the argument, I'm hesitant to spend more time reading.

~~~
LocalPCGuy
I found an error in the first sentence of your post that dissuaded me from
giving the rest of it much credence. Saying you didn't read it, but decided to
comment anyways.

~~~
ALittleLight
What error?

~~~
tomazio
That one went right over your head.

~~~
ALittleLight
No, I don't think so. I think my response went right over yours.

The comment is trying to mirror my criticism - "I read the first part, found
an error, and stopped". My response is trying to highlight that, in fact,
their response does not mirror mine, because I pointed out actual errors that
motivated me to stop reading, whereas that comment did not (apparently).

In other words, if I had actually made substantive errors in my first sentence
or so, it might make sense to stop reading. I'd have already demonstrated that
my thinking wasn't very clear. If that was the case though, then it would be
an invalid criticism of my reasoning (read a bit, saw an error, stopped)
because that comment author would be following the same paradigm. On the other
hand, if I didn't actually make any substantive errors in the first sentence
or so of my post, then the criticism is still invalid, because, while I actually
pointed out substantive errors in the OP, this comment doesn't point out
substantive errors in my comment.

~~~
LocalPCGuy
It was a joke mostly - but specifically, I find it funny that you read a
portion and, instead of just moving on, felt the need to poke at the article
without at least finishing it. That is the "error" - who knows, maybe your
criticisms were addressed later on? We'll never know :P

(like I said, it was mostly in jest, so don't take it too seriously, please)

------
Animats
A point I've made on here before is that the big near-term risk from AI is not
general-purpose artificial intelligence. It's machine learning systems that do
a better job of making corporate decisions than humans. Corporations are
shareholder value maximizers. There's a powerful school of thought, the
Chicago School, which claims that's all they should be, and that they have no
other responsibilities.

Machine learning systems are really good at maximizing some defined criterion.
It's quite possible that they might get good at making corporate decisions.
They already do that for some investment funds.
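
As a crude sketch of that point (a toy example of my own, not anything an actual fund or company runs): hand an optimizer a single stated criterion like profit and it will push every lever that raises it, including levers that dump costs on everyone else, because nothing in the objective says otherwise.

```python
# Crude toy sketch (my own example): an optimizer told only to maximize profit
# will happily maximize the externality too, because the externality is not
# part of the criterion it was given.

def profit(output: float) -> float:
    revenue = 12.0 * output
    internal_cost = 0.5 * output ** 2
    return revenue - internal_cost      # this is all the optimizer sees

def pollution(output: float) -> float:
    return 3.0 * output                 # real cost, but not in the objective

# Naive search over production levels, maximizing the stated criterion only.
candidates = [x / 10 for x in range(0, 301)]
best = max(candidates, key=profit)

print(f"chosen output    = {best:.1f}")             # 12.0
print(f"profit           = {profit(best):.1f}")     # 72.0
print(f"pollution caused = {pollution(best):.1f}")  # 36.0, nobody asked about it
```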

Once machine learning systems are better at corporate decision making than
humans, market forces will demand they be put in charge. The companies with
inferior human-based technology will start to lose out. That's implicit in the
forces behind capitalism.

Be afraid, CEOs. Be very afraid. The machines are coming to take your job.

(Somebody should turn this into a TED talk.)

~~~
koheripbal
> Corporations are shareholder value maximizers.

The logic goes that organizations, in general, should focus on what they are
good at, while the government should pass regulation to incentivize socially
good behaviors.

What we need to work on is having a stronger government that's less influenced
(monetarily) by corporations, to set boundaries, set incentives, and police
corporations to good behavior.

To be honest, we often highlight all the places this goes wrong, but all the
industries that don't get a lot of press are good examples of this working
well.

------
polynomial
> We're living in yesterday's future, and it's nothing like the speculations
> of our authors and film/TV producers.

Exception proves the rule: [http://www.openculture.com/2020/07/a-1947-french-film-accura...](http://www.openculture.com/2020/07/a-1947-french-film-accurately-predicted-our-21st-century-addiction-to-smartphones.html)

------
bawolff
> Nobody in 2007 was expecting a Nazi revival in 2017, right?

Is that actually true? I seem to remember people predicting that the USA has
been edging towards fascism for a while now, especially in the aftermath of
9/11.

For example, here is an article from 2003 predicting the USA is heading towards
fascism: [https://secularhumanism.org/2003/03/fascism-anyone/](https://secularhumanism.org/2003/03/fascism-anyone/)

~~~
bitwize
Nobody among the intellectual class anticipated Trump's election in 2016. Much
like physicists hypothesized dark matter to account for the apparent greater
mass necessary to stabilize galaxies and such, the political intelligentsia
hypothesized an enormous, hidden body of "dark Nazis" to account for their
chosen candidate not winning the election. It is unfathomable to them that
enough people in the center, center-right, and right could be discontented
enough, both with being told whom to vote for and with being told why someone
running on a platform of tighter immigration controls and more jobs for
Americans is evil incarnate, that they were willing to "hold their nose and
vote for Trump" and swing the election in his favor.

Political intellectuals tend to lean left, and whenever a rightist scores a
major political victory they start predicting stormtroopers goosestepping
through American streets Real Soon Now, going back to at least Reagan. So the
Nazi Revival is generally accepted as real, and if you contest the idea's
truth you may be considered one of them.

~~~
jrumbut
This is very untrue, even if we take "nobody" as a slight overstatement: half
of the election prediction models had him above a 10% chance of winning
(FiveThirtyEight had him at 29%,
[https://www.nytimes.com/interactive/2016/upshot/presidential...](https://www.nytimes.com/interactive/2016/upshot/presidential-polls-forecast.html?_r=0#other-forecasts)).

WBUR, Boston's NPR news station, published in their blog "Cognoscenti" an
article titled "Why Donald Trump Will Win in November" in May of 2016
([https://www.wbur.org/cognoscenti/2016/05/06/election2016-tru...](https://www.wbur.org/cognoscenti/2016/05/06/election2016-trump-establishment-politics-tom-keane)).
I can't think of anything more representative of what people think of as the
left-leaning intellectual class than a Boston NPR blog with a Latin title.

This idea that his winning in 2016 was inconceivable to media/intellectual
circles is a bit of revisionist history.

~~~
free_rms
Eh. Most of the mainstream media ran headlines predicting a 99% chance of a
Hillary win, and a few of them sniped at Nate Silver for giving Trump a 30%
chance (stupid tech bros).

A single contrarian take doesn't undermine that trend.

~~~
dragonwriter
> Most of the mainstream media

The mainstream media and the intellectual class have very little overlap.

~~~
free_rms
The intellectual class media I should say, then. NYT, Washington Post, etc.

------
anchpop
> And once you start probing the nether regions of transhumanist thought and
> run into concepts like Roko's Basilisk—by the way, any of you who didn't
> know about the Basilisk before are now doomed to an eternity in AI hell—you
> realize they've mangled it to match some of the nastiest ideas in
> Presbyterian Protestantism.

I wish every discussion of transhumanism didn't have to involve Roko's
Basilisk. It's not something anyone takes seriously (and very few ever did),
but it has enough quirky weirdness that everyone seems to want to talk about
it.

Here's a quote from "Lesswrongwiki":

> Roko's argument was broadly rejected on Less Wrong, with commenters
> objecting that an agent like the one Roko was describing would have no real
> reason to follow through on its threat

> [...]

> Less Wrong's founder, Eliezer Yudkowsky, banned discussion of Roko's
> basilisk on the blog for several years as part of a general site policy
> against spreading potential information hazards. This had the opposite of
> its intended effect: a number of outside websites began sharing information
> about Roko's basilisk, as the ban attracted attention to this taboo topic.
> Websites like RationalWiki spread the assumption that Roko's basilisk had
> been banned because Less Wrong users accepted the argument; thus many
> criticisms of Less Wrong cite Roko's basilisk as evidence that the site's
> users have unconventional and wrong-headed beliefs.

- [https://wiki.lesswrong.com/wiki/Roko's_basilisk](https://wiki.lesswrong.com/wiki/Roko's_basilisk)

~~~
LiquidSky
Your comment (and your links) miss the point of mocking references to "Roko's
Basilisk". Whether it's actually a widely-held belief of the Less
Wrong/greater rationalist community is irrelevant; it's emblematic of the eye-
rolling nonsense that they engage in.

It's the exact secular equivalent of "How many angels can dance on the head of
a pin?"[1], which was also not an actual accepted topic of medieval religious
scholarly debate but an illustration of the usual absurdity of such debate
(e.g., there was an actual centuries-long scholarly discussion of whether
angels were sexless or had sexes).

[1][https://en.wikipedia.org/wiki/How_many_angels_can_dance_on_t...](https://en.wikipedia.org/wiki/How_many_angels_can_dance_on_the_head_of_a_pin%3F)

------
darepublic
When I predict what I'm going to accomplish in a given day I often get it
wrong. Even in my own life I have a hard time predicting what will come next

------
nojs
The comparison between corporations optimising for profit and algorithms
optimising for engagement, with the mispriced externalities being similar in
both cases, is interesting. But otherwise this reads as a cookie-cutter rant
about evil corporations and *ist AI without much deeper insight.

------
jariel
Corporations are nothing but a group of actors working together to achieve
some outcome, part of which may be profit, and it's absurd to think that they
are a 'Modern Era', or even Western, concept, as every single land-owning
farmer, artisan, banker, and merchant in history was de facto a form of
'corporation'. Moreover, many institutions with origins in antiquity (schools,
governments, NGOs) are close enough to the organising principles of
corporations that they could be regarded in the same broad social
categorisation.

His 18th century definition of 'Corporation' fits perfectly with the historical
notion of the 'petit bourgeois' - minor land-holding families, technical guilds,
larger artisanal groups. Do we think Athenian trireme warships were built by
a guy and his assistants? No, they were built by well-organised groups with
specialisation, paid for their work, aka 'corporations'.

The most interesting thing about these talks is actually social: what kinds
of ideas and memes will rise to the fore among people who are aspirationally
antagonistic, like social anarchists (I don't mean that in a negative sense; I
mean that would be the closest technical 'ideal' that defines a group like the
'Chaos Computer Club'). Mohawks, anti-establishment statements such as 'ardent
atheist', and hints of softly anti-corporate ideals.

"Life-span of corporation is shorter, largely due to predation (totally
unsubstantiated), corporation's are _cannibals_ , they eat one another (that's
an interesting way to describe mergers)."

"For the first century and a 1/2 they depended entirely on human employees" \-
no, actually, the Industrial Revolution was literally about harnessing the
power of fossil fuels via machines to _automate_ that which would have been
done by people (or animals) before. Neither humans nor horses moved those
trains.

And then of course the dehumanising of governments and corporations as AI:
"What do our AI Overlords want?".

I don't believe there is really any relationship between 'corporations' and
'AI'; it's a neat idea. What we have here is an intelligent, creative guy with
an antagonist's worldview, working in a field wherein he's free to make up
loose associations and hint at them as if they were facts, and he's subsumed an
interesting idea from CS into his own world view.

I mean, it's great to try ideas that bend our minds a little bit, but I think
it's clear he's a writer of fiction.

~~~
einpoklum
Corporations are literally the something-extra over a "group of actors working
together".

~~~
pdonis
No, they aren't. Corporations don't magically do things that the group of
actors working together don't do.

~~~
dodobirdlord
They literally do. Plenty of legal and regulatory constructs apply only to
corporations. Limited liability is a thing that is available by forming a
corporation, as is carrying out an SEC filing and selling stock. A corporation
can also survive, as a legal construct, the deaths of all founding members.

~~~
jariel
Those constructs are part of the modern, legal features of corporations, but
they are not the essential nature of what corporations are.

Remove those features, and things would change a lot, but you'd still have
'corporations' of a kind.

Corporations are also not primarily profit-driven - the owners may be - but
corporations themselves 'do things' which will result in a lot of
externalities and surpluses generated elsewhere; only some of the profits may
come back to the shareholders.

Corporations are:

Shareholders, Debtors, Buyers, Suppliers, Executives, Employees.

Shareholders may very well be the smallest beneficiaries of an endeavour. They
have certain rights, but other groups have rights as well: lenders have first
rights to the assets, and so do other creditors such as suppliers. Employees
have legal rights including collective bargaining.

Buyers may have incredible power over companies such that they suck out all of
the profits (see: selling to Apple).

Debtors have all of the power during restructuring.

Many companies exist at the whim of the employees - like big Auto, who pay
super high wages and benefits relative to the job. Possibly government
employees as well.

Some Execs, by virtue of a weak or allied Board, have all the power and suck
out vast profits that would otherwise go to investors.

~~~
einpoklum
> Those constructs are part of the modern, legal features of corporations, but
> they are not the essential nature of what corporations are.

Maybe, but the corporations of concern to the public in most countries today
are those with the problematic "modern legal features".

------
pronoiac
(2017)

------
coldtea
Not so sure about his ability to predict the future. This, for example, like
most "the sky is falling" predictions, didn't age well:

[https://www.antipope.org/charlie/blog-static/2018/07/that-si...](https://www.antipope.org/charlie/blog-static/2018/07/that-sinking-feeling.html)

~~~
klenwell
It looks to me like most of Stross's predictions depend first on the event of
Brexit, which has not occurred yet as far as I know.

Here's a map the NY Times published on 6/15/2005:

[http://graphics8.nytimes.com/images/2005/06/15/business/arm3...](http://graphics8.nytimes.com/images/2005/06/15/business/arm3.gif)

It makes an implicit prediction about the then booming housing market and the
sky above it.

On that date, the S&P 500 closed at 1,206.58.

Two years later, in 2007, the S&P 500 closed at 1,522.97.

To the extent that stock markets tell us where we stand as a nation or a
society, I find this example instructive.

There's such a thing as the long view. It's hard to know how long it should
be. Harder to know whether it will be proved out in the end until it is proved
or disproved. Even then sometimes we miss the forest for the trees.

S&P Data:
[https://finance.yahoo.com/quote/%5EGSPC/history?period1=1118...](https://finance.yahoo.com/quote/%5EGSPC/history?period1=1118793600&period2=1181865600&interval=1d&filter=history&frequency=1d)

------
Ygg2
> nastiest ideas in Presbyterian Protestantism.

And if you follow Charlie's reasoning, that simpler parts can't lead to a
highly advanced intelligence, you come to the conclusion that intelligence is
derived from a divine soul.

Intelligence isn't as uncommon as we think, but it takes a long time to emerge.

~~~
hwillis
> if you follow Charlie's reasoning, that simpler parts can't lead to a
> highly advanced intelligence, you come to the conclusion that intelligence
> is derived from a divine soul.

That's an absolutely ridiculous interpretation. Asserting that our current
"simple parts" are still very far from AGI is nowhere even close to asserting
that intelligence is immaterial.

It's very, very obvious that there are myriad possible leaps we might need to
get closer to intelligence. You can't build an attention network from a
perceptron. It's totally reasonable to say that we need a few more fundamental
discoveries first. Obvious, even, given the insane amount of processing
required to teach a network a language. Brains are clearly wired in a way that
is simply more efficient than how we currently know how to build stuff.

~~~
Ygg2
> That's an absolutely ridiculous interpretation. Asserting that our current
> "simple parts" are still very far from AGI is nowhere even close to
> asserting that intelligence is immaterial.

Not really what it says. Look at the following passage:

> AI singularity as a narrative, and identify the numerous places in the story
> where the phrase "... and then a miracle happens" occurs, it becomes
> apparent pretty quickly that they've reinvented Christianity.

The phrase "miracle happens" , to me is suspect. Intelligence rose many times
in various creatures. There is nothing miracle about something that rose
multiple number of times.

Are Singularists wrong? Yes. They confuse saturation with exponential curves;
current neural networks are a far cry from actual neuronal networks, and their
timescales reflect more their fear of death than any sensible timeline.
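
To make the "saturation vs exponential" point concrete (a sketch of my own, not from the thread): early on, a logistic curve that is about to flatten out is numerically hard to tell apart from a genuine exponential, which is exactly why extrapolating the early segment is so tempting.

```python
# Sketch: a saturating (logistic) curve tracks an exponential closely at first,
# then flattens out; extrapolating the early segment as "exponential" overshoots badly.
import math

def exponential(t: float) -> float:
    return math.exp(t)

def logistic(t: float, cap: float = 1000.0) -> float:
    # Logistic curve with carrying capacity `cap`, starting at 1 when t = 0.
    return cap / (1.0 + (cap - 1.0) * math.exp(-t))

for t in range(0, 11, 2):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):8.1f}")
# Up to about t = 4 the two curves stay close; by t = 10 the exponential is
# around 22,000 while the logistic has saturated just below its cap of 1,000.
```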

If Charlie wants to criticize Singularists, there are plenty of valid reasons.
Them being cult-like is the least important one.

~~~
hwillis
When I say that he's not saying that "intelligence is immaterial" I mean
literally that he is not saying intelligence does not arise from material
processes.

Equivalently, when he says a miracle happens, it is not the same thing as
saying "miraculous intervention is required to endow programs with
intelligence". He is saying that current and near-term machine learning
techniques are not capable of scaling exponentially self-accelerating,
godlike, incomprehensible _superintelligence_ , not that _no program ever_ can
reach intelligence.

> There is nothing miraculous about something that arose multiple times.

It is ridiculous to imply he believes that based on his statements, or even
just that passage in isolation. That would ignore - and take for granted as
true - the many assumptions required for a singularity-like event _besides_
the ability to create artificial intelligence. These include:

1. That our techniques are anywhere near reaching the general intelligence of
a human.

2. That that intelligence can be capably run on existing hardware, which
presumes cognition does not rely significantly on processes happening within
neurons, only between them.

3. That the intelligence of a human, given the understanding of itself and the
ability to modify itself, would be capable of making improvements that would
compound to a significant degree; if you presume intelligence takes off
exponentially, then unless you're already smart it's hard to add much more.

4. That intelligence _isn't_ effectively limited by single-threaded
performance.

5. That the ability to think can unlock all the wonders of the universe, and
simply being sufficiently smart will allow you to infer the tremendous amounts
of hidden state and randomness that dictate life.

If any of those _extremely reasonable_ things fail, the singularity takes
millions of years or is simply impossible. Expecting the singularity within
the next millennium isn't just faith in all of the above; it's faith that all
of the above is so true that the process happens in less than a decade. It is
fanatical.

------
strken
One of the biggest problems with thought experiments designed as an
introduction to an entire class of problems is that they get misinterpreted
as a complete statement of the problem.

Charlie Stross is quite dismissive of paperclip maximisers: Elon Musk "has an
obsessive fear" of them, and "isn't paying enough attention" because Tesla
does the same thing. He refers to a "pure paperclip maximiser" and discusses
the "naive vision of a paperclip maximiser".

This is quite insulting, because the paperclip maximiser is a thought
experiment designed to introduce the consequences of intelligences which have
fundamentally different values to human beings, in an accessible way. What
he's doing is like reading a kids' book on counting and then writing an
article contrasting naive apple-based addition with his brilliant new idea of
generic fruit-based counting.

------
autokad
>"Science fiction is written by people embedded within a society with
expectations and political assumptions that bias us towards looking at the
shiny surface of new technologies rather than asking how human beings will use
them, and to taking narratives of progress at face value rather than asking
what hidden agenda they serve."

And be prepared for the most agenda ridden text you have read in a long time.

~~~
loa_in_
That's absolutely not what I associate science fiction with. Stanisław Lem for
example couldn't be further from taking shiny technology at face value

~~~
aasasd
Lem was disgusted with contemporary Western science fiction for the exact
reason that it was techno-fetishistic. He was in opposition to the sci-fi
mainstream, not in it.

