Another point is that companies can be bigger now. There are scaling problems with heavy industry. General Motors ran into scaling problems decades ago - they were too big to get out of their own way, and management couldn't keep track of what was happening in distant plants. Google and Apple don't seem to have that problem. Neither does Maersk, the world's largest shipping line. Or DHL or FedEx or UPS or Amazon or WalMart or McDonalds.
Industrial civilization is even younger than Stross says. Less than 200 years. A good start date is the opening of the Liverpool and Manchester Railway in 1830. For the first time, anyone could buy a ticket on a scheduled train and go someplace. This is when the industrial revolution got out of beta and started to scale. There were steam engines for a century before that, but they were isolated one-offs. The Liverpool and Manchester had many engines, double track, stations, timetables, signals, and paying customers. That's when things really started to move. Quite literally.
I'd say there is a whole other kingdom of AIs that are apex predators: armies. Corporations have nothing on them. Corporate AIs politely pay taxes to armies, which use the resources to develop and restrict access to the most dangerous tech.
We already had two top-tier AI wars. And one cold AI war that almost ended everything. What saved us was the apex AIs' survival instinct.
What corporations do is second tier stuff. They are herbivores of this ecosystem.
We are the plants.
The OP's fear about corporations feasting on our attention falls flat for me.
I'm more afraid of becoming useless to corporations, and by proxy to armies, than of being something they need for their survival.
That Facebook needs me, even though I don't work for it and don't buy any of its products, is wonderful news for me, because it means I'll survive longer.
States, more explicitly.
The insight that human organizations can be seen as superorganisms (and perhaps superintelligences) is right, and applies to states, businesses, municipalities, churches, etc.
But the whole point of the insight isn't to be generally suspicious of these semi-alien systems that are generally more powerful than individual humans. It's to think more carefully about how to program and limit them, just like with AIs.
One of the nice things about at least one of the two state AIs in that cold war you mentioned is that people did/do think about this, and we've ended up with a system fairly well designed to operate with broad user input. It could probably use some improvement, but it's likely better than a broad segment of its constituent humans is prepared to take advantage of.
Business AIs are generally pretty good at taking advantage of that, though. In fact, there's a credible argument they're exploiting features of the system to heavily influence the first tier.
Not quite. States are just facades for armies. When a state fails to collect taxes for the army, it is simply replaced.
I'm not so sure that peace between top AIs comes from user input. WWII made armies realise they can't win a fight and that they suffer trying. Thanks to that experience, the Cold War was won by waiting the opponent out. Humans in all of this were just cogs, not the users.
From that perspective, the Eisenhower Warning was at least a half-century too late; the military-industrial process built the modern army (not vice versa) in its AI paperclip-maximizing.
Corporations are optimization processes. So is advertising. So are social networks, smartphone addiction, political polarization, television, mobile games, and most forms of entertainment.
This prevalence, and the fact that it's not EVIL doing it but just amoral goal-directed processes, seems to me to be the key to recognizing the problem, fighting back, and fixing society.
We have to figure out some way to fight for our human values, against these optimization processes. I don't think Stross has (or claims to have) a strong answer there... any ideas?
So, right now we use abstract metrics like GDP or stock indices as proxies for 'success'. Countries optimize for GDP, and companies optimize for making money and delivering monetary value to their shareholders.
Maybe we could collectively view those dollar figures as value-neutral and use metrics like international education scores, life expectancy, access to quality nutrition and medical care, incarceration/recidivism rates, etc. to indicate 'success'.
But again, no concrete ways to act on that. What, should we start telling people that money is useless and doesn't matter? I don't think they'll believe us while so many don't have access to aspirational opportunities, or even basic necessities. But don't we have the means to provide those things?
Here's a really nice introduction.
Obvious counterpoint: Money is also a metric susceptible to Goodhart's law. Is it better or worse if the metric is useless except as a target?
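A toy sketch of that Goodhart dynamic (my own illustration; the functions and numbers are made up, not from the thread): split a fixed effort budget between real substance and metric-gaming, then let a naive optimizer chase the proxy. As soon as gaming the metric pays better than substance anywhere in the search space, all effort flows there.

```python
# Hypothetical toy model of Goodhart's law: an optimizer targeting a
# proxy metric instead of the true objective it was meant to track.

def true_value(substance, gaming):
    # Only substantive effort creates real value.
    return substance

def proxy_metric(substance, gaming):
    # The proxy rewards substance, but gaming it pays three times better.
    return substance + 3 * gaming

BUDGET = 10
best = max(
    ((s, BUDGET - s) for s in range(BUDGET + 1)),
    key=lambda split: proxy_metric(*split),
)
print("allocation (substance, gaming):", best)   # (0, 10)
print("proxy score:", proxy_metric(*best))       # 30
print("true value delivered:", true_value(*best))  # 0
```

The optimizer maximizes the proxy perfectly while delivering zero true value; that's the sense in which a metric that is "useless except as a target" can be the worst case.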
It's possible that passion is now the main resource needed to get your political message out, and radicals tend to be more passionate than the rest of us.
Of course, to moderates, this looks terrible. I consider myself a moderate (or at least I did before the last election; the Overton window, it would seem, has moved), so I personally think this looks terrible, and it's not the most likely possibility in my opinion. But you could construct a self-consistent narrative in which this was simply democracy in action; it matches much of the available evidence.
The big hole in this explanation is the evidence of foreign powers buying advertising to prop up our radicals. My understanding is that the investigation in that direction is still ongoing... but even then, the idea that a (much poorer) nation could buy our election would be in line with what I suggested about the presses (or at least the opinion-shaping part of the presses) being much cheaper than expected.
The gist of it is that he believes markets are good at solving a particular problem, given material constraints and demand, but that markets are awful at determining which problems to solve. There are relatively few and weak incentives and punishments for corporations (other than for things that are outright illegal), so the market is approximately solving whatever problems have the lowest barriers to entry. And a website you can start in your dorm room that ends up scaling, monetizing eyeballs, and addicting people is a perfect example. "How do I make a lot of money with one computer and a day or a week or a month's worth of code?" Most of the answers to that question are unhelpful to society, but as long as they're profitable and legal, people will pursue them.
He offers some speculative thoughts on solutions but to me they seem relatively unlikely to solve most of our problems, even if they can solve some of them. For instance, I think (or rather, hope) the combination of fitness+nutrition+healthcare+medical tracking can be fixed if dealt with as a single system. And by fixed, I mean encouraging fitness and providing only nutritious food in the community and thereby lowering everyone's overall food+fitness+healthcare costs, even if food and fitness costs go up. But I don't see how it works as a free market opt-in cluster of services. If you set up something like that, and it is overall cheaper than the status quo, how do you keep poor people and homeless people from signing up immediately and bankrupting your new system? They have a lot more incentive to switch to that new system than rich people (who would probably be subsidizing it to some extent which is a tough sell even if they end up happier and healthier).
The only people who can get up in the morning every day and work on these problems are lobbyists. Usually, they're working in the wrong direction. Regardless, does anyone really think we could have a lobbying-based economy?
The free market is working on frivolous problems because that's what's left to do when you don't have the force of a state behind you.
Can you clarify what you mean by this?
The space of problems that could hypothetically be addressed by a free-market actor is pretty well tended already; it isn't crazy that we see free-market actors working on things that look unimportant.
The challenge is to find an international solution.
Targeted advertising can be mutually beneficial. Of course corporations are incentivised to push it past that point, to the point where it's actively harmful. But they're not the only game in town; countermeasures exist and are practical (ad-blocking, or simply ignoring ads). More generally, fun is inherently addictive, and the author isn't going to convince me to try to bar people from doing things they enjoy without drawing a more principled line between the two.
Likewise "propaganda", and in any case politicians have been lying for longer than anyone can remember, and our society has thrived; indeed political speech gets stronger protection than other forms of speech, which translates into free rein to stretch the truth further than an advertiser or regular person would be permitted to. Humans can learn to ignore hearsay and rhetoric and pay attention only to a politician's written manifesto. I'd argue we should've done so years ago.
Video can now be faked, sure. But it's always been cheap and easy to fake quotes, and again, this is something we've dealt with.
Mob violence apps... eh. I don't really buy that people can be "nudged" into the kind of huge behavioural change that getting involved in violence would be for most ordinary, peaceful people. Assuming of course we maintain strong social norms against violence. But who would be so foolish as to try to dismantle those?
* 'Paris in the Twentieth Century' by Jules Verne
* 'Brave New World' by Aldous Huxley
* '1984' by George Orwell
* 'The Space Merchants' by Frederik Pohl and Cyril M. Kornbluth (1952)
Verne correctly predicts many technologies and the changes they cause in society. Probably the most accurate future projection overall.
The other three books (Brave New World, 1984, and The Space Merchants) neatly complement each other. Our reality is a mix of the three themes represented in them.
Some people think that 'Stand on Zanzibar' should be mentioned.
After the drama at Uber over the last ~year, Sleep Dealer now seems surprisingly prescient.
(Warning: the trailer is terrible and leaves out ~2/3 of the movie's themes, which also include a critique of drone-based warfare in parallel with its discussion of Uber-style corporations optimizing labor.)
The article's discussion of corporations also reminds me of Snow Crash, which is not so old and also pretty lighthearted, but full of good food for thought nonetheless.
It's been a while since I saw it, but I think a few AI-like features are mentioned, even if the corporations-as-AIs idea is not spelled out. Their lifespan, in particular, I recall coming up.
Edit: Removed markdown formatting.
If you haven't read it, you should. It is here: http://www.antipope.org/charlie/blog-static/fiction/accelera...
(which I can find because of my 2 year old comment: https://news.ycombinator.com/item?id=11488412 )
The interesting bit is that what appear to be rigid "paperclip maximizers" (I love this term) evolve secondary behaviors: emotions, religions, moral codes, and government. This leads to a question: if the system of corporations is itself a secondary behavior arising from the natural selection pressures applied to humans, while simultaneously being an analogue of that very system, will we see similar tertiary behaviors emerge from it? Will corporations develop religion, moral codes, and governance?
One can arguably point to standards bodies as a form of governance (more nebulously, one could suggest that corporations have repurposed human governance systems as a tool for governance that serves corporations). My best framing of a moral code is as a behavior that in isolation gives the self less advantage, but through cooperation becomes mutually beneficial to the participants - something that prevents harm to the self by not harming others. This sounds a lot like what we would label "anticompetitive behavior"!
I had a lot of fun following this train of thought, but the analogue seems to get extremely flimsy beyond this point. Morals are relative, they evolve as society changes, and the whole process seems so much more accelerated for corporations that predicting anything or providing suggestions through this approach seems silly. I still think it's a fun analogue, though, and it does make me curious what sorts of unintuitive behaviors a corporate world could give rise to in the future.
Excellent speech - both content and delivery (through the video).
According to the Civ1 tech graph, which I consider the authoritative resource, they just need to develop philosophy and writing in order to research religion. They definitely got the writing part nailed. Also they are dabbling with philosophy. I think they'll make it.
On a slightly more serious note, some people (yep, I have read Sapiens) would argue that religions, ideologies, and corporations are all variations of the concept of shared belief systems. That is, the human ability to collectively make stuff up and pretend that it's real. Thus, corporations and religions are more or less the same thing. It's an interesting way to look at it IMHO.
EDIT: Sorry, the read mode on mobile did not show the embedded video.
It is really fun, while reading a science fiction or fantasy book, to recognize the history that is being plundered for the story, and then be able to predict the next few plot developments.
Cynicism is such a fashionable attitude these days. This strain of cynicism about our institutions - both public and private - has been present in western culture for a very long time. A healthy amount of cynicism is good, because it can itself be a source of progress, but too much produces an unrealistic and clouded outlook.
TL;DR: "Dude" discovered the universe likes to optimize, and identifies top-down regulation as the countermeasure. Political plugs abound ("Nazis took over the US").
Previous discussion: https://news.ycombinator.com/item?id=16032643
I really don't mind them.
As I understood it, Stross's point about ads isn't that ads are bad per se (although they're pretty bad - do you use an ad-blocker?) but that they come along with a lot of unwanted other stuff, because the entities serving ads are set up as increasingly efficient advertising machines.
For example, Facebook (by design) becomes ever more demanding of attention, producing an endless stream of new notifications and features to capture more of the time you spend on the platform. This develops into an attritional battle with the user for supremacy, rather than a subservient tool to improve their lives.
> if someone gives you racist or sexist abuse to your face you can complain (or punch them). But it's impossible to punch a corporation
I can't imagine what you think is controversial about that.
Are you sure about that? I'm pretty certain that outspoken anti-bigots and people commonly labeled as SJWs are not in a proper subset relationship in either direction.
>Or imagine you're male and gay, and the "God Hates Fags" crowd has invented a 100% reliable Gaydar app
For mobs to act, it doesn't need to be 100% reliable, or even 25%, so long as the mob gets their kicks.
Just as likely, given the average political outlook of the app dev community, is a "Payback the NRA" app that lets users target gun owners for liquidation. Something tells me the author wouldn't mind that app so much.
Stross' contribution made me think. Yours didn't.