Dude, you broke the future (antipope.org)
267 points by cstross 9 months ago | 63 comments



I agree with a lot of that talk. What's getting us now is that corporations are getting better at maximizing shareholder value. For a long time, big companies were relatively static entities. They got into some line of business, built the necessary factories or railroad tracks or stores, and mostly stayed there. Now, some of them pivot into new businesses with agility and take over. All the "tech" businesses use roughly the same infrastructure, which you can rent from Amazon. So agility is cheap.

Another point is that companies can be bigger now. There are scaling problems with heavy industry. General Motors ran into scaling problems decades ago - they were too big to get out of their own way, and management couldn't keep track of what was happening in distant plants. Google and Apple don't seem to have that problem. Neither does Maersk, the world's largest shipping line. Or DHL or FedEx or UPS or Amazon or WalMart or McDonalds.

Industrial civilization is even younger than Stross says. Less than 200 years. A good start date is the opening of the Liverpool and Manchester Railway, 1830. For the first time, anyone could buy a ticket on a scheduled train and go someplace. This is when the industrial revolution got out of beta and started to scale. There were steam engines for a century before that, but they were isolated one-offs. The Liverpool and Manchester had many engines, double track, stations, timetables, signals, and paying customers. That's when things really started to move. Quite literally.


Charlie's not dating from the start of industrial civilization. He's dating from the creation of the first artificial intelligences, which is at least a century earlier. I think it's safe to say that these AIs' control of capital is largely responsible for the fast spread of industrialization, since it let them optimize better for their own purposes.


The observation that corporations are already AIs, and of the worst kind (multiple, warring, environment-altering, driven by survival), is interesting.

I'd say that there is a whole other kingdom of AIs that are apex predators: armies. Corporations have nothing on them. Corp AIs politely pay taxes to armies, and the armies use those resources to develop, and restrict access to, the most dangerous tech.

We've already had two top-tier AI wars, and one cold AI war that almost ended everything. What saved us was the apex AIs' survival instinct.

What corporations do is second-tier stuff. They are the herbivores of this ecosystem.

We are the plants.

The OP's fear about corporations feasting on our attention falls flat for me.

I'm more afraid of becoming useless to corporations, and by proxy to armies, than of being something they need for their survival.

The fact that Facebook needs me, even though I don't work for it and don't buy any of its products, is wonderful news for me, because it means I'll survive longer.


> I'd say that there is whole another kingdom of AIs that are apex predators. Armies.

States, more explicitly.

The insight that human organizations can be seen as superorganisms (and perhaps superintelligences) is right, and applies to states, businesses, municipalities, churches, etc.

But the whole point of the insight isn't to be generally suspicious of these semi-alien systems that are generally more powerful than individual humans. It's to think more carefully about how to program and limit them, just like with AIs.

One of the nice things about at least one of the two state AIs in that cold war you mention is that people did (and do) think about this, and we've ended up with a system fairly well designed to operate with broad user input. It could probably use some improvement, but it's likely better than a broad segment of its constituent humans is prepared to take advantage of.

Business AIs are generally pretty good at taking advantage of that, though. In fact, there's a credible argument they're exploiting features of the system to heavily influence the first tier.


Thomas Hobbes' Leviathan is the definitive work on this [0]. Modern nation states are living gods. For better or for worse, they are largely unconscious in their actions and relatively slow to move. When a state-level actor achieves agency on timescales similar to individual humans', I think all we can do is hope that whatever process brought it into being has endowed it with a love of all living beings and a propensity to let the living expire at their natural time.

0. https://en.wikipedia.org/wiki/Leviathan_(book)


> States, more explicitly.

Not quite. States are just facades for armies. When a state fails to collect taxes for the army, it is simply replaced.

I'm not so sure that peace between top AIs comes from user input. WWII made armies realise they can't win a fight and that they suffer trying. Thanks to that experience, the Cold War was won by waiting the opponent out. Humans in all of this were just cogs, not the users.


I feel like Stross' observation applies here though: Stross maintains that before the advent of the corporation (AI) armies weren't so strongly self-regulating. The wars to which you are referring are after the corporations started to eat governments (and thus governments became corporations, to one extent or another, mostly by being built by and from corporations).

From that perspective the Eisenhower Warning was at least a half-century or so too late; the military-industrial process built the modern army (not vice versa) in its AI paperclip-maximizing.


(porting my comment over from my dupe)

Corporations are optimization processes. Advertising is an optimization process. Social networks are. Smartphone addiction. Political polarization. Television, mobile games, and most forms of entertainment. This prevalence, and the fact that it's not EVIL doing it but just amoral goal-directed processes, seems to me to be the key to recognizing them, fighting back, and fixing society.

We have to figure out some way to fight for our human values, against these optimization processes. I don't think Stross has (or claims to have) a strong answer there... any ideas?


I have an idea, just no concrete ways to act on it. I think that we need to make a system of incentives where the values which we care about are the ones being optimized for.

So, right now we use abstract numbers like GDP or stock indices as metrics for 'success'. Countries optimize for GDP, and companies optimize for making money and delivering monetary value to their shareholders.

Maybe we could collectively view those dollar figures as value-neutral and use metrics like international education scores, life expectancy, access to quality nutrition and medical care, incarceration/recidivism rates, etc. to indicate 'success'.

But again, no concrete ways to act on that. What, should we start telling people that money is useless and doesn't matter? I don't think they'll believe us while so many don't have access to aspirational opportunities, or even basic necessities. But don't we have the means to provide those things?


Just FYI, you're describing socialism and there is a large body of theory that you can read about that discusses these ideas. It even takes into account the mistakes of the mid-20th century.

Here's a really nice introduction.

https://www.worldsocialism.org/spgb/pamphlets/capitalism-soc...


There's risk that any of those metrics could be compromised by Goodhart's law: "When a measure becomes a target, it ceases to be a good measure."

Obvious counterpoint: Money is also a metric susceptible to Goodhart's law. Is it better or worse if the metric is useless except as a target?
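
A minimal sketch of the failure mode (Python, all numbers invented): select hard enough on a noisy proxy and most of what you "win" is noise rather than the underlying value you cared about.

    import random
    random.seed(1)

    N = 10_000
    candidates = []
    for _ in range(N):
        true_value = random.gauss(0, 1)             # what we actually care about
        measured = true_value + random.gauss(0, 2)  # the proxy metric, tracking it noisily
        candidates.append((measured, true_value))

    # Optimize purely on the measured score, as Goodhart's law warns against.
    best_measured, best_true = max(candidates)
    print(f"winner's measured score: {best_measured:.2f}")  # looks spectacular
    print(f"winner's true value:     {best_true:.2f}")      # far less impressive

The winner's measured score is mostly noise: the harder the selection pressure on the proxy, the bigger the gap between what the metric reports and what is actually there.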


Another counterpoint is that not everything can be made into a metric, no matter how hard you try. The advertising world has duped us into believing that we can make metrics out of exceedingly subjective things like "satisfaction" and "happiness", but asking a dozen people to pick a number between 0 and 5 on whatever subject you come up with doesn't mean a 3.76 average means anything at all in the real world.
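
To make that concrete, here is a minimal sketch (Python, entirely made-up respondents): the averaging arithmetic succeeds whether or not the inputs measure anything, so the tidy two-decimal score carries unearned precision.

    import random
    random.seed(7)

    # Twelve respondents "rate" something from 0 to 5. If the question is too
    # subjective to answer meaningfully, model their picks as uniform noise.
    ratings = [random.randint(0, 5) for _ in range(12)]

    print(ratings)                                        # twelve noise values
    print(f"average: {sum(ratings) / len(ratings):.2f}")  # a tidy two-decimal score either way

Nothing in that computation distinguishes a measured preference from a coin flip; only knowledge of how the numbers were produced can.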


It is possible that what we are seeing is just what you suggest, that people are fighting back and trying to fix (as they see it) society. Perhaps this is what participatory democracy looks like when the publishing tools are all very accessible; this is just what it looks like when you make the presses cheap enough.

It's possible that passion is now the main resource needed to get your political message out, and radicals tend to be more passionate than the rest of us.

Of course, to moderates, this looks terrible. I consider myself a moderate (or at least I did before the last election; the Overton window, it would seem, has moved), so I personally think this looks terrible, and it's not the most likely possibility, in my opinion. But you could construct a self-consistent narrative in which this was simply democracy in action; it matches much of the available evidence.

The big hole in this explanation is the evidence of foreign powers buying advertising to prop up our radicals. My understanding is that the investigation in that direction is still ongoing... but even then, the idea that a (much poorer) nation could buy our election would be in line with what I suggested about the presses (or at least the opinion-shaping part of the presses) being much cheaper than expected.


Bret Weinstein's take on it (on Joe Rogan's podcast): https://www.youtube.com/watch?v=LzAgSp_O03I#t=1h05m40s (I recommend starting 5-10 minutes earlier, but the linked starting time is where he really gets to the heart of the matter.)

The gist of it is that he believes markets are good at solving a particular problem, given material constraints and demand, but that markets are awful at determining what problems to solve. There are relatively weak and few incentives and punishments for corporations (other than for things that are outright illegal), so the market is approximately solving whatever problems have the lowest barriers to entry. And a website you can start in your dorm room that ends up scaling and monetizing eyeballs and addicting people is a perfect example. "How do I make a lot of money with one computer and a day or a week or a month's worth of code?" Most of the answers to that question are unhelpful to society, but as long as they're profitable and legal, people will pursue them.

He offers some speculative thoughts on solutions, but to me they seem relatively unlikely to solve most of our problems, even if they can solve some of them. For instance, I think (or rather, hope) the combination of fitness+nutrition+healthcare+medical tracking can be fixed if dealt with as a single system. And by fixed, I mean encouraging fitness and providing only nutritious food in the community, thereby lowering everyone's overall food+fitness+healthcare costs, even if food and fitness costs go up. But I don't see how it works as a free-market, opt-in cluster of services. If you set up something like that, and it is overall cheaper than the status quo, how do you keep poor people and homeless people from signing up immediately and bankrupting your new system? They have a lot more incentive to switch to that new system than rich people (who would probably be subsidizing it to some extent, which is a tough sell even if they end up happier and healthier).


The problems that really need solving are of a political nature. Who gets the rewards and who bears the risks, what are the public spending priorities, what are the rules on employers, multinationals, real estate developers, etc. Does a miraculous breakthrough in high-end medical research even matter when no one can afford basic healthcare?

The only people who can get up in the morning every day and work on these problems are lobbyists. Usually, they're working in the wrong direction. Regardless, does anyone really think we could have a lobbying-based economy?

The free market is working on frivolous problems because that's what's left to do when you don't have the force of a state behind you.


> The free market is working on frivolous problems because that's what's left to do when you don't have the force of a state behind you.

Can you clarify what you mean by this?


Startups are not working on the big problems in education, healthcare, housing/homelessness, income inequality, etc. because those are not problems you can solve by bringing innovative products and services to market, they're problems you solve with different public policy.

The space of problems that could hypothetically be addressed by a free-market actor is pretty well tended already; it isn't crazy that we see free-market actors working on things that look unimportant.


It is the job of governments to balance economy and social issues. Germany calls its approach Social Market Economy: https://en.wikipedia.org/wiki/Social_market_economy

The challenge is to find an international solution.


There are specific arguments as to why we should expect AIs to go from ridiculous to dangerous very quickly. You don't have to agree with them, but it's worth acknowledging that Musk et al do have a rationale behind their worries, one that doesn't apply to corporations etc.

Targeted advertising can be mutually beneficial. Of course corporations are incentivised to push it past that point, to where it's actively harmful. But they're not the only game in town; countermeasures exist and are practical (ad-blocking, or simply ignoring ads). More generally, fun is inherently addictive, and the author isn't going to convince me to bar people from doing things they enjoy without drawing a more principled line between the two.

Likewise "propaganda", and in any case politicians have been lying for longer than anyone can remember, and our society has thrived; indeed political speech gets stronger protection than other forms of speech, which translates into free rein to stretch the truth further than an advertiser or regular person would be permitted to. Humans can learn to ignore hearsay and rhetoric and pay attention only to a politician's written manifesto. I'd argue we should've done so years ago.

Video can now be faked, sure. But it's always been cheap and easy to fake quotes, and again, this is something we've dealt with.

Mob violence apps... eh. I don't really buy that people can be "nudged" into the kind of huge behavioural change that getting involved in violence would be for most ordinary, peaceful people. Assuming of course we maintain strong social norms against violence. But who would be so foolish as to try to dismantle those?


I do not acknowledge that Musk et al have a rationale behind their worries. It is uninformed speculation. Right now we have no plausible path toward building an AGI. There's no cause for worry until someone builds a computer at least as smart as a mouse.


Stross seemed to be talking about AI more as systems operating beyond human comprehension rather than "intelligence" like you or me or a mouse would have. A hyper-efficient but brainless agent could still be incredibly destructive, just as viruses and locusts are.


What rank is the best go-playing mouse?


How well can AlphaGo navigate a complex social hierarchy with no clear rules?


How confident are you that that skill is required for wiping out humanity?


Four Sci-Fi books that get projections right:

* 'Paris in the Twentieth Century' by Jules Verne

* 'Brave New World' by Aldous Huxley

* '1984' by George Orwell

* 'The Space Merchants' by Frederik Pohl and Cyril M. Kornbluth (1952)

Verne correctly predicts many technologies and the changes they cause in society. Probably the most accurate future projection overall.

The other three books, Brave New World, 1984, and The Space Merchants, neatly complement each other. Our reality is a mix of the three themes represented in these books.

Some people think that 'Stand on Zanzibar' should be mentioned.


"A Logic Named Joe" has predicted the Internet and many of our current problems in the mid-40's:

https://en.wikipedia.org/wiki/A_Logic_Named_Joe

https://www.theregister.co.uk/2016/03/19/a_logic_named_joe/


One of my favorite "X Minus One" episodes: https://archive.org/details/OTRR_X_Minus_One_Singles/XMinusO... .


The film Sleep Dealer [1] jumped straight to the top of my favorite cyberpunk fiction when it was released in 2008. It offered a perspective on modern corporations and how they use technology that is very different from the usual narrative told in more privileged (usually white, usually upper-middle-class and above) areas.

After the drama at Uber over the last ~year, Sleep Dealer now seems surprisingly prescient.

(Warning: the trailer [2] is terrible and leaves out ~2/3 of the movie's themes, which also include a critique of drone-based warfare in parallel with its discussion of Uber-style corporations optimizing labor.)

[1] https://en.wikipedia.org/wiki/Sleep_Dealer

[2] https://www.youtube.com/watch?v=nbJGQl-dJ6c


'Stand on Zanzibar' is a work of genius, truly.


'Stand on Zanzibar' is absolutely eerie in how well it predicted the future. In the non-fiction department, Zbigniew Brzezinski's "The Technetronic Era" is also disturbingly prescient.


god yes, Space Merchants is the most prophetic fucking thing I ever read as a kid devouring SF in the 70s. It seemed like such broad, impossible parody back then, but here we are with ads invading every single possible surface of our lives and corporations flouting regulations with impunity.


'Player Piano' by Kurt Vonnegut, but only half of its premise (sadly, the more hopeful half).


Adding The Space Merchants and Stand on Zanzibar to my reading list.

The article's discussion of corporations also reminds me of Snow Crash, which is not so old, and pretty light-hearted, but full of good food for thought nonetheless.


Can anybody cite prior art on the notion that corporations are paperclip maximizers? It's not a profound idea, I don't think, but the notion came to me about a month ago; shortly after, SciFi author Ted Chiang [1] said the same thing on BuzzFeed, and here's Charlie Stross saying it. Is this an example of simultaneous invention, or has it been kicking around for a while and I just haven't noticed?

[1]https://www.buzzfeed.com/tedchiang/the-real-danger-to-civili...


The earliest I can recall being exposed to the concept was this 2013 blog article: http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-...


It's funny, I also thought of it mere seconds before reading it in this article. I have a bad habit of scrolling to the middle of a document before reading it, to see whether the first half is even worth reading (if I'm confused, then yes it is), and when he mentioned "paperclip maximizers" without connecting them to capitalism, my first thought was to tie the idea to corporations. It's so obvious now that I'm embarrassed I didn't think of it first.


There's a related idea in a movie made in 2003 called "The Corporation" [1]. The movie is pretty strongly anti-corporate, and the main point of it is that, in their role as actors in the legal system, "legal persons", corporations are deeply sociopathic, and perhaps ought not to be tolerated, or at least should be more regulated.

It's been a while since I saw it, but I think a few AI-like features are mentioned, even if the corporations-as-AIs idea is not spelled out. Their lifespan, in particular, I recall coming up.

[1] https://en.wikipedia.org/wiki/The_Corporation_(film)

Edit: Removed markdown formatting.


I feel like this is a sentiment that should be fairly familiar to you if you have read Stross' previous works. There are points in Accelerando (2005) where semisentient AI corporations are a threat to the protagonist, along with things like automatic legal-system denial-of-service attacks on other corporations.

If you haven't read it, you should. It is here: http://www.antipope.org/charlie/blog-static/fiction/accelera...


Stross has articulated the general concept before:

http://www.antipope.org/charlie/blog-static/2010/12/invaders...

(which I can find because of my 2 year old comment: https://news.ycombinator.com/item?id=11488412 )


Organizations-as-AIs (both corporations and governments) was brought up in Engines of Creation in the mid-80s. I suppose Drexler got the idea from the MIT AI culture (I don't remember anything specific about corporations, but there was e.g. Hewitt & someone else's "The Scientific Community Metaphor") and Hayek.


It's worth thinking about natural selection, which has historically created a system of participants programmed to maximize their share of the gene pool. I think it's at least a passable analogue for a system of corporations that are each designed to maximize profit.

The interesting bit is that what appear to be rigid "paperclip maximizers" (I love this term) evolve secondary behaviors: emotions, religions, moral codes, and government. This leads to a question: if the system of corporations is itself a secondary behavior produced by the natural-selection pressures applied to humans, while simultaneously being an analogue of that very system, will we see similar tertiary behaviors emerge from it? Will corporations develop religion, moral codes, and governance?

One can arguably point to standards bodies as a form of governance (more nebulously, one could suggest that corporations have repurposed human governance systems as a tool that serves corporations). The best way I can frame moral codes is as behaviors that in isolation give the self less advantage, but through cooperation become mutually beneficial to the participants; something that prevents harm to the self by not harming others. This sounds a lot like what we would label "anticompetitive behavior"!

I had a lot of fun following this train of thought, but the analogue seems to get extremely flimsy beyond this point. Morals are relative, they evolve as society changes, and the whole process seems so much more accelerated for corporations that predicting anything, or providing suggestions, through this approach seems silly. I still think it's a fun analogue, though, and it does make me curious what sorts of unintuitive behaviors a corporate world could give rise to in the future.

Excellent speech - both content and delivery (through the video).


> Will corporations develop religion

According to the Civ1 tech graph, which I consider the authoritative resource, they just need to develop philosophy and writing in order to research religion. They definitely got the writing part nailed. Also they are dabbling with philosophy. I think they'll make it.
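
(For the pedantic, a toy dependency check in the spirit of that tech graph; the prerequisites follow the parent's description of the game, and the corporations' "known techs" set is obviously part of the joke.)

    # Hypothetical Civ1-style prerequisite check (Python).
    prereqs = {"Religion": {"Philosophy", "Writing"}}

    def can_research(tech, known):
        return prereqs.get(tech, set()).issubset(known)

    corporations_know = {"Writing", "Philosophy"}  # writing: nailed; philosophy: dabbling
    print(can_research("Religion", corporations_know))  # True -- they'll make it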

On a slightly more serious note, some people (yep, I have read Sapiens) would argue that religions, ideologies, and corporations are all variations of the concept of shared belief systems. That is, the human ability to collectively make stuff up and pretend that it's real. Thus, corporations and religions are more or less the same thing. It's an interesting way to look at it IMHO.


Charles Stross read the text at the 34C3. If you want to listen to it rather than read it:

[1]: https://www.youtube.com/watch?v=RmIgJ64z6Y4

[2]: https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future

EDIT: Sorry, the read mode on mobile did not show the embedded video.


> We science fiction writers tend to treat history as a giant toy chest to raid whenever we feel like telling a story. With a little bit of history it's really easy to whip up an entertaining yarn about a galactic empire that mirrors the development and decline of the Hapsburg Empire

It is really fun, while reading a science fiction or fantasy book, to recognize the history that is being plundered for the story, and then be able to predict the next few plot developments.


Why stop at "corporations"? How about any sufficiently large social group?


What a distorted view of the future. Distorted by negativity fixated on events and possibilities that are blown far out of proportion and context, that ignores the tremendous progress and promise in the world of the present and the future.

Cynicism is such a fashionable attitude these days. It's a strain of cynicism about our institutions - both public and private - that has been present in western culture for a very long time. A healthy amount of cynicism is good, because it can itself be a source of progress, but too much of it produces an unrealistic and clouded outlook.


What's your optimistic take on the trends he's talking about?


"That mistake was to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue."

TL;DR: "Dude" discovered the universe likes to optimize, and identifies top-down regulation as the countermeasure. Political plugs abound ("Nazis took over the US").

Previous discussion: https://news.ycombinator.com/item?id=16032643


It's as if ads are some terrible injustice brought down upon the weak and helpless masses...

I really don't mind them.


> I really don't mind them.

I understood Stross's point about ads to be not that ads are bad per se (although they're pretty bad - do you use an ad-blocker?) but that they come along with a lot of other unwanted stuff, because the entities serving ads are set up as increasingly efficient advertising machines.

For example, Facebook (by design) becomes ever more demanding of attention, producing an endless stream of new notifications and features to capture more of the time you spend on the platform. This develops into an attritional battle with the user for supremacy, rather than a subservient tool that improves their lives.


[flagged]


Maybe you're a victim of exactly what he is describing. No? Of course not. Help yourself to another steaming hot dollop of outrage from wherever you get it.


Only occurrence of the word "punch" in the article:

> if someone gives you racist or sexist abuse to your face you can complain (or punch them). But it's impossible to punch a corporation

I can't imagine what you think is controversial about that.


So I had to look up SJW and found out it is used to mean "Social Justice Warrior", as a pejorative. You're going to need better pejoratives, punk!


Yeah, "SJW" basically means "outspoken anti-bigot," and using it as an insult says a lot more about the speaker than their target. Kinda like "intellectual" and "cuckold".


"SJW" basically means "outspoken anti-bigot"

Are you sure about that? I'm pretty certain that outspoken anti-bigots and people commonly labeled as SJWs are not even in a proper subset relationship in either way.


Google <sjw star wars>. There's a large group of people convinced that the new Star Wars movies were "ruined" by "SJWs" because they have too many female characters.


I'd like to see more simulation in future prediction. Or a bracketing of worst/best cases based on different scenarios.


I found this piece the most interesting --

>Or imagine you're male and gay, and the "God Hates Fags" crowd has invented a 100% reliable Gaydar app

For mobs to act, it doesn't need to be 100% reliable, or even 25% reliable, so long as the mob gets its kicks.

Just as likely, given the average political outlook of the app dev community, is a "Payback the NRA" app that lets users target gun owners for liquidation. Something tells me the author wouldn't mind that app so much.


Previous discussion on the video: https://news.ycombinator.com/item?id=16032643


Dude, you broke the future 2x.


[flagged]


Specifics, please.

Stross' contribution made me think. Yours didn't.


Oh, I thought I was already shadowbanned, didn't realize anybody would see this.


[flagged]


Where have you been for the last two years?



