
Dude, you broke the future - cstross
http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html
======
Animats
I agree with a lot of that talk. What's getting us now is that corporations
are getting better at maximizing shareholder value. For a long time, big
companies were relatively static entities. They got into some line of
business, built the necessary factories or railroad tracks or stores, and
mostly stayed there. Now, some of them pivot into new businesses with agility
and take over. All the "tech" businesses use roughly the same infrastructure,
which you can rent from Amazon. So agility is cheap.

Another point is that companies can be bigger now. There are scaling problems
with heavy industry. General Motors ran into scaling problems decades ago -
they were too big to get out of their own way, and management couldn't keep
track of what was happening in distant plants. Google and Apple don't seem to
have that problem. Neither does Maersk, the world's largest shipping line. Or
DHL or FedEx or UPS or Amazon or Walmart or McDonald's.

Industrial civilization is even younger than Stross says. Less than 200 years.
A good start date is the opening of the Liverpool and Manchester Railway in
1830. For the first time, anyone could buy a ticket on a scheduled
train and go someplace. This is when the industrial revolution got out of beta
and started to scale. There were steam engines for a century before that, but
they were isolated one-offs. The Liverpool and Manchester had many engines,
double track, stations, timetables, signals, and paying customers. That's when
things really started to move. Quite literally.

~~~
labster
Charlie's not dating from the start of industrial civilization. He's dating
from the creation of the first artificial intelligences, which is at least a
century earlier. I think it's safe to say that these AIs' control of capital
is largely responsible for the fast spread of industrialization, so that they
could optimize for their purposes better.

------
scotty79
The observation that corporations are already AIs, and among the worst kind
(multiple, warring, environment-altering, driven by survival), is interesting.

I'd say there is a whole other kingdom of AIs that are apex predators:
armies. Corporations have nothing on them. Corp AIs politely pay taxes to
armies, and the armies use those resources to develop, and restrict access
to, the most dangerous tech.

We have already had two top-tier AI wars, and one cold AI war that almost
ended everything. What saved us was the apex AIs' survival instinct.

What corporations do is second tier stuff. They are herbivores of this
ecosystem.

We are the plants.

The OP's fear about corporations feasting on our attention falls flat for me.

I'm more afraid of becoming useless to corporations, and by proxy to armies,
than of being something they need for their survival.

That Facebook needs me, despite the fact that I don't work for it and don't
buy any of its products, is wonderful news for me, because it means I'll
survive longer.

~~~
wwweston
> I'd say there is a whole other kingdom of AIs that are apex predators:
> armies.

States, more explicitly.

The insight that human organizations can be seen as superorganisms (and
perhaps superintelligences) is right, and applies to states, businesses,
municipalities, churches, etc.

But the whole point of the insight isn't to be generally suspicious of these
semi-alien systems that are generally more powerful than individual humans.
It's to think more carefully about how to program and limit them, just like
with AIs.

One of the nice things about at least one of the two state AIs in that cold
war you were talking about is that people did (and do) think about this, and
we've ended up with a system fairly well designed to operate with broad user
input. It could probably use some improvement, but it's likely better than a
broad segment of its constituent humans is prepared to take advantage of.

Business AIs are generally pretty good at taking advantage of that, though. In
fact, there's a credible argument they're exploiting features of the system to
heavily influence the first tier.

~~~
hyperion2010
Thomas Hobbes's _Leviathan_ is the definitive work on this [0]. Modern nation
states are living gods. For better or for worse, they are largely unconscious
in their actions and relatively slow to move. When a state level actor
achieves agency on timescales similar to individual humans, I think all we can
do is hope that whatever process brought it into being has endowed it with a
love of all living beings and a propensity to let the living expire at their
natural time.

0\.
[https://en.wikipedia.org/wiki/Leviathan_(book)](https://en.wikipedia.org/wiki/Leviathan_\(book\))

------
nod
(porting my comment over from my dupe)

Corporations are optimization processes. Advertising is an optimization
process. Social networks are. Smartphone addiction. Political polarization.
Television, mobile games, and most forms of entertainment. This prevalence,
and the fact that it's not EVIL doing it but just amoral goal-directed
processes, seems to me to be the key to recognizing the problem, fighting
back against it, and fixing society.

We have to figure out some way to fight for our human values, against these
optimization processes. I don't think Stross has (or claims to have) a strong
answer there... any ideas?

~~~
leggomylibro
I have an idea, just no concrete ways to act on it. I think that we need to
make a system of incentives where the values which we care about are the ones
being optimized for.

So, right now we use abstract numbers like GDP or stock indices as metrics
for 'success'. Countries optimize for GDP, and companies optimize for making
money and delivering monetary value to their shareholders.

Maybe we could collectively view those dollar figures as value-neutral and use
metrics like international education scores, life expectancy, access to
quality nutrition and medical care, incarceration/recidivism rates, etc. to
indicate 'success'.

But again, no concrete ways to act on that. What, should we start telling
people that money is useless and doesn't matter? I don't think they'll believe
us while so many don't have access to aspirational opportunities, or even basic
necessities. But don't we have the means to provide those things?

~~~
guscost
There's a risk that any of those metrics could be compromised by Goodhart's law:
"When a measure becomes a target, it ceases to be a good measure."

Obvious counterpoint: Money is also a metric susceptible to Goodhart's law. Is
it better or worse if the metric is useless _except_ as a target?
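The selection effect behind Goodhart's law can be sketched numerically. In this toy simulation (all numbers and names hypothetical), each "policy" has a true value, and we score it with a metric that tracks that value only noisily. Once we optimize for the metric, the winner is disproportionately one that got lucky noise, so its score overstates its true value:

```python
import random

random.seed(0)

# Toy sketch (all numbers hypothetical): each "policy" has a true value
# and a measured metric that tracks it only imperfectly.
policies = []
for _ in range(1000):
    value = random.gauss(0, 1)
    metric = value + random.gauss(0, 1)  # metric = value + noise
    policies.append((metric, value))

# Optimize for the metric: pick the highest-scoring policy.
winner_metric, winner_value = max(policies)

# Selecting on the metric favors lucky noise, so the winning score
# systematically overstates the true value behind it.
print(f"winner's metric:     {winner_metric:.2f}")
print(f"winner's true value: {winner_value:.2f}")
```

The gap between the two printed numbers is Goodhart's "measure becomes a target" failure in miniature: the harder you select on the proxy, the less it tells you.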

~~~
WorldMaker
Another counterpoint is that not everything can be made a metric no matter how
hard you try. The advertising world has duped us into believing that we can
make metrics out of exceedingly subjective things like "satisfaction" and
"happiness", but just because you ask a dozen people to pick a random number
between 0 and 5 on any subject you think you can come up with doesn't mean
that a 3.76 average means anything at all in the real world.
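As a toy illustration (the ratings below are made up), two sets of 0-5 scores can share an average while describing opposite realities, which is exactly why the bare 3.76-style number carries so little meaning:

```python
from statistics import mean, stdev

# Hypothetical ratings: identical 3.75 average, very different realities.
polarized = [0, 0, 0, 5, 5, 5, 5, 5, 5, 5, 5, 5]  # a quarter of users hate it
lukewarm  = [3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4]  # everyone mildly pleased

print(mean(polarized), mean(lukewarm))    # identical averages
print(stdev(polarized), stdev(lukewarm))  # wildly different spreads
```

The average alone cannot distinguish a product a quarter of users despise from one that everyone is lukewarm about; at minimum you'd need the distribution as well.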

------
lmm
There are specific arguments as to why we should expect AIs to go from
ridiculous to dangerous very quickly. You don't have to agree with them, but
it's worth acknowledging that Musk et al do have a rationale behind their
worries, one that doesn't apply to corporations etc.

Targeted advertising can be mutually beneficial. Of course, corporations are
incentivised to promote it past the point of mutual benefit, to the point
where it's actively harmful. But they're not the only game in town;
countermeasures do exist and
are practical (adblocking or simply ignoring ads). More generally, fun is
inherently addictive, and the author isn't going to convince me to try to bar
people from doing things they enjoy without drawing a more principled line
between the two.

Likewise "propaganda", and in any case politicians have been lying for longer
than anyone can remember, and our society has thrived; indeed political speech
gets stronger protection than other forms of speech, which translates into
free rein to stretch the truth further than an advertiser or regular person
would be permitted to. Humans can learn to ignore hearsay and rhetoric and pay
attention only to a politician's written manifesto. I'd argue we should've
done so years ago.

Video can now be faked, sure. But it's always been cheap and easy to fake
quotes, and again, this is something we've dealt with.

Mob violence apps... eh. I don't really buy that people can be "nudged" into
the kind of huge behavioural change that getting involved in violence would be
for most ordinary, peaceful people. Assuming of course we maintain strong
social norms against violence. But who would be so foolish as to try to
dismantle those?

~~~
nradov
I do not acknowledge that Musk et al have a rationale behind their worries. It
is uninformed speculation. Right now we have no plausible path toward building
an AGI. There's no cause for worry until someone builds a computer at least as
smart as a mouse.

~~~
lmm
What rank is the best go-playing mouse?

~~~
nradov
How well can AlphaGo navigate a complex social hierarchy with no clear rules?

~~~
lmm
How confident are you that that skill is required for wiping out humanity?

------
HONEST_ANNIE
Four Sci-Fi books that get projections right:

* 'Paris in the Twentieth Century' by Jules Verne

* 'Brave New World' by Aldous Huxley

* '1984' by George Orwell

* 'The Space Merchants' by Frederik Pohl and Cyril M. Kornbluth (1952)

Verne correctly predicts many technologies and the changes they cause in
society. Probably the most accurate future projection overall.

The other three books (Brave New World, 1984, and The Space Merchants)
neatly complement each other. Our reality is a mix of the three themes
represented in these books.

Some people think that 'Stand on Zanzibar' should be mentioned.

~~~
romwell
"A Logic Named Joe" has predicted the Internet and many of our current
problems in the mid-40's:

[https://en.wikipedia.org/wiki/A_Logic_Named_Joe](https://en.wikipedia.org/wiki/A_Logic_Named_Joe)

[https://www.theregister.co.uk/2016/03/19/a_logic_named_joe/](https://www.theregister.co.uk/2016/03/19/a_logic_named_joe/)

~~~
Rapzid
One of my favorite "X Minus One" episodes:
[https://archive.org/details/OTRR_X_Minus_One_Singles/XMinusO...](https://archive.org/details/OTRR_X_Minus_One_Singles/XMinusOne55-12-28031ALogicNamedJoe.mp3).

------
Fricken
Can anybody cite prior art on the notion that corporations are paperclip
maximizers? It's not really a profound idea, I don't think, but the notion came
to me about a month ago, and shortly after SciFi author Ted Chiang[1] said the
same thing on BuzzFeed, and here's Charlie Stross saying it. Is this an
example of simultaneous invention, or has it been kicking around for a while,
and I just haven't noticed?

[1][https://www.buzzfeed.com/tedchiang/the-real-danger-to-
civili...](https://www.buzzfeed.com/tedchiang/the-real-danger-to-civilization-
isnt-ai-its-runaway?utm_term=.ie2Dm2ORv#.do2VP2pAz)

~~~
mortenjorck
The earliest I can recall being exposed to the concept was this 2013 blog
article: [http://omniorthogonal.blogspot.com/2013/02/hostile-ai-
youre-...](http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-
soaking-in-it.html)

------
wallacoloo
I think it's worth thinking about natural selection, which historically has
created a system of participants programmed to maximize their share of the
gene pool. I think it's at least a passable analogue for a system
of corporations that are each designed to maximize profit.

The interesting bit is that what _appear_ to be rigid "paperclip maximizers"
(I love this term) evolve secondary behaviors. Emotions, religions, _moral
codes_ , and government. This leads to the question that if the system of
corporations is a secondary behavior from the natural selection pressures
applied to humans, while simultaneously being an analogue of that very system,
will we see similar tertiary behaviors emerge from this system? Will
corporations develop religion, moral codes and governance?

One can arguably point to standards bodies as a form of governance (more
nebulously, one could suggest that corporations have repurposed human
governance systems as a tool for governance that serves corporations). My best
framing of a moral code is as a set of behaviors that in isolation give the
self _less_ advantage, but through cooperation become mutually beneficial to
the participants - something that prevents harm to the self by not harming
others. This sounds a lot like what we would label _"
anticompetitive behavior"_!

I had a lot of fun following this train of thought, but the analogue seems to
get extremely flimsy beyond this point. Morals are relative, they evolve as
society changes, and the whole process seems so much more accelerated when it
comes to corporations that predicting anything or providing suggestions
through this approach seems silly. I still think it's a fun analogue though,
and it does make me curious what sorts of unintuitive behaviors a corporate
world could give rise to in the future.

Excellent speech - both content and delivery (through the video).

~~~
mvindahl
> Will corporations develop religion

According to the Civ1 tech graph, which I consider the authoritative resource,
they just need to develop philosophy and writing in order to research
religion. They definitely got the writing part nailed. Also they are dabbling
with philosophy. I think they'll make it.

On a slightly more serious note, some people (yep, I have read Sapiens) would
argue that religions, ideologies, and corporations are all variations of the
concept of shared belief systems. That is, the human ability to collectively
make stuff up and pretend that it's real. Thus, corporations and religions are
more or less the same thing. It's an interesting way to look at it IMHO.

------
leipert
Charles Stross read the text at the 34C3. If you want to listen to it rather
than read it:

[1]:
[https://www.youtube.com/watch?v=RmIgJ64z6Y4](https://www.youtube.com/watch?v=RmIgJ64z6Y4)

[2]:
[https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future](https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future)

EDIT: Sorry, the read mode on mobile did not show the embedded video.

------
VintageCool
> We science fiction writers tend to treat history as a giant toy chest to
> raid whenever we feel like telling a story. With a little bit of history
> it's really easy to whip up an entertaining yarn about a galactic empire
> that mirrors the development and decline of the Hapsburg Empire

It is really fun, while reading a science fiction or fantasy book, to
recognize the history that is being plundered for the story, and then be able
to predict the next few plot developments.

------
vadimberman
Why stop at "corporations"? How about any sufficiently large social group?

------
meri_dian
What a distorted view of the future: distorted by a negativity fixated on
events and possibilities blown far out of proportion and context, one that
ignores the tremendous progress and promise in the world of the present and
the future.

Cynicism is such a fashionable attitude these days. It's a strain of cynicism
about our institutions - both public and private - that has been present in
western culture for a very long time. A healthy amount of cynicism is good,
because it itself can be a source of progress, but too much cynicism produces
an unrealistic and clouded outlook.

~~~
Apocryphon
What's your optimistic take on the trends he's talking about?

------
Radim
_" That mistake was to fund the build-out of the public world wide web—as
opposed to the earlier, government-funded corporate and academic internet—by
monetizing eyeballs via advertising revenue."_

TL;DR: "Dude" discovered the universe likes to optimize, and identifies top-
down regulation as the countermeasure. Political plugs abound ("Nazis took
over the US").

Previous discussion:
[https://news.ycombinator.com/item?id=16032643](https://news.ycombinator.com/item?id=16032643)

~~~
meri_dian
It's as if ads are some terrible injustice brought down upon the weak and
helpless masses...

I really don't mind them.

~~~
chimprich
> I really don't mind them.

I understood Stross's point about ads to be not that ads are bad per se
(although they're pretty bad - do you use an ad-blocker?) but that they come
along with a lot of other unwanted stuff, because the entities serving ads
are set up as increasingly efficient advertising machines.

For example, Facebook (by design) becomes ever more demanding of attention,
producing an endless stream of new notifications and features to capture more
of the time you spend on the platform. This develops into an attritional
battle with the user for supremacy, rather than a subservient tool to improve
their lives.

------
amelius
I'd like to see more simulation in future prediction. Or a bracketing of
worst/best cases based on different scenarios.

------
squozzer
I found this piece the most interesting --

>Or imagine you're male and gay, and the "God Hates Fags" crowd has invented a
100% reliable Gaydar app

For mobs to act, the app doesn't need to be 100% reliable, or even 25%, so
long as the mob gets its kicks.

Just as likely, given the average political outlook of the app dev community,
is a "Payback the NRA" app that lets users target gun owners for liquidation.
Something tells me the author wouldn't mind that app so much.

------
ahmedalsudani
Previous discussion on the video:
[https://news.ycombinator.com/item?id=16032643](https://news.ycombinator.com/item?id=16032643)

~~~
sus_007
Dude, you broke the future 2x.

