AI thinks like a corporation (2018) (economist.com)
140 points by samclemens on Feb 4, 2020 | 60 comments



There's a great article by Ted Chiang about this[1], comparing the psychology of the paperclip AI scenario to the single-minded "increasing shareholder value" mentality of business, in particular Silicon Valley startups.

He points out that the companies of the very people who fear an AI apocalypse already do what they accuse AI of doing: pursuing growth and 'eating the world' at all costs to maximize narrowly defined, singular goals.

[1]https://www.buzzfeednews.com/article/tedchiang/the-real-dang...


I’d actually turn both this and the original article around, and say that corporations behave like AIs. They are artificial. They are (variably) intelligent.

The functions of processing inputs to create meaningful outputs are carried out by neurons made of meat (human employees) and of silicon. These processes are governed by rules and metaprocesses, which the entity continuously optimises and improves in order to further its own growth and to improve its fitness for its purpose - typically, enhancing shareholder value. I cannot think of a single criterion for intelligence, or even for life, that a corporation does not possess, from independent action to stimulus response to reproduction to consumption to predation.

I think the first true hard AI will emerge accidentally, and it will be a corporation that has largely optimised humans out of the loop - but even with humans as a component of a system, a gestalt, an artificial supra-intelligence, can still emerge.

This also neatly sidesteps the whole question of “should AIs have rights?”, as corporations are already legally persons.


I agree: http://beza1e1.tuxen.de/companies_are_ai.html

It leads to an interesting question: Is a corporation controlled by its CEO?


Nice - that overlaps pretty much perfectly with my reasoning!

In my view, no. The CEO is at best a race-condition-breaking function; at worst, a parasitic infection.

If corporations are self-optimising AIs, then they will optimise CEOs out of existence if they aren’t conducive to fitness.


Does that not suggest that CEOs are worth their cost to the vast majority of companies that retain them? What companies operate without a de facto CEO?


Well, CEOs may be inherited from the non-optimal initial conditions of early ancestors and simply haven't been replaced by a better competitor yet. Or they have been and we don't understand that yet. Naturally occurring genomes encode for processes that result in all sorts of things that don't actually have to occur: the appendix, the frenulum, most of the DNA itself, as far as we can tell. Nature's full of random dead-ends that keep propagating because the system hasn't yet cleared the barrier out of its current local minimum.


A chicken could run faster if it optimized its head away, because it could ignore limiting factors like potential obstacles on its current course. But for how long?

Actual problem solving in a corporation is done by humans. Corporate policy can provide a set of incentives to align employees' goals with the corporation's goals, and a structure to combine specialists' efforts more or less effectively. But someone has to think about the corporation as a whole and change its policies if the need arises.

Cephalization is a known evolutionary trend after all.


CEOs are controlled by their context, i.e. owners and regulation (for publicly traded companies), which drives the profit motive, and then have the agency to take various actions to achieve this goal.


I have a strong feeling that if we keep going like this, one day there will be an AI that optimizes you out: "Humans do not add much value to things; they are just cost centres."


Humans add all the value and always will, because that's what value is - what humans want. If an AI has some goal independent of humans, it's no different from a car rolling downhill because the parking brake failed.


Sure, but good luck convincing an AI of that after it has power. It's important to build the AI to think that way from the start if it is capable of gaining power over us.


Reminds me of this great SMBC comic: https://www.smbc-comics.com/comic/2012-04-03

We need to be careful of this kind of utility optimization, even when it is applied to humans.


It's been a long time since I took a course in philosophy, and all I recalled was John Stuart Mill being associated with utilitarianism.

Looking at his Wikipedia page, it appears that his idea of the utility associated with pleasure was independent of the specific person. If X is the amount of pleasure humanity gets from something, and Y is the number of people that exist, then the pleasure an individual is assumed to get is X/Y. If I understand correctly.

If someone were to abuse the concept of utility by considering it to differ between people, it seems to me a lot more likely that they would say something like "I get a unit of utility from this candy bar, but many other people do/would not, so I don't need to share with them", rather than "I get millions of units of utility from this candy bar and others only get a normal amount".
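
A toy illustration of that arithmetic (my own, with made-up numbers): equal-weighting divides total pleasure across everyone, while the "abuse" described above amounts to claiming an outsized weight for yourself.

    # Toy illustration of the utility arithmetic above (all numbers made up).
    total_pleasure = 100.0                      # X: pleasure humanity gets from something
    population = 50                             # Y: number of people
    per_person = total_pleasure / population    # Mill-style equal weighting: X / Y

    # The kind of abuse described above: weighting people unequally,
    # e.g. "this candy bar is worth a million units to me, one to you".
    my_claimed_utility = 1_000_000
    typical_utility = 1
    print(per_person, my_claimed_utility / typical_utility)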


Great quote: "There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook’s and Amazon’s goals were aligned with the public good. But I shouldn’t be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming."

The narrative that the danger is that corporations might accidentally let AIs gain power is appealing, but it indeed ought to be challenged.

The paperclip/strawberry monomania fear masks the way things are shaping up now. AI is far from being able to intelligently plan over a time horizon - maintaining a real-world process all the way to even a smallish goal requires real-world corner-case handling that the best AIs haven't got (self-driving cars have been five years away for 10-15 years, etc.).

But AIs as "intelligent" filters are here now. These allow "better" decision making along with unaccountable decision making, both of which indeed fit well with the agenda of the modern corporation. And that's the thing - the modern corporation is already short-sighted; the process of ignoring climate change began well before automated decision making appeared. But the modern corporation is still more far-seeing than a deep learning network, and still theoretically and legally accountable to society. However, deep learning networks that maximize only the quantities these corporations want maximized, and ignore other considerations, do indeed suit the short-term preferences of these institutions.


In which a science fiction writer weaves a fantastical tale about how the Doomsayers are making their decisions from fantastical tales and emotions, instead of actually reading and responding to the arguments they present. The irony is so palpable that if it wasn't so disingenuous, it might even be funny.

Here's some actual insight on the topic: https://www.youtube.com/watch?v=L5pUA3LsEaw


The article has some neat points but I feel like it's confusingly tugging in opposite directions. It's weird to assert that AIs couldn't possibly be accidentally made to be singularly-focused on a task that's harmful in excess, but groups of people as corporations totally could. But also AIs and corporations are the same. But also only sociopaths would worry that AIs might possibly be accidentally made without human concerns, that's science fiction, only groups of people could manage to operate without human concerns.

I think the point of thinking about how we regulate corporations so they don't focus on singular goals in a harmful way is important. I just find the article's framing weird, as if it thinks people can care about only one of AI alignment or corporation alignment, and that it's desperately important that they pick #2 instead of #1. I think the topics are actually very related and it's actually useful to think of them as different examples of the same problem. Thinking about AI alignment can help us understand how corporations should be regulated, and vice-versa.

(I think the AI safety/friendliness/alignment problem is important, but it's really only relevant to human-level AGI. It's not about modern-level AI / machine learning / algorithms / whatever. No modern AI/ML understands the world at a human level and generates its own goals and motivations in a general sense. There are plenty of ethical issues about how we use modern AI/ML in businesses and so on, but they're pretty separate from the AI alignment problem. The AI alignment problem only becomes an imminent issue further down the line, like "how do we build moonbases that don't fail?", which is interesting, and I'm tired of tons of articles bringing up the alignment problem like "There's some weirdos wondering about how to make buildings in space, but don't they know there's nothing to build things out of in low-earth orbit?? Are they just delusional?".)


So the corporations project their own immorality on AI.


Only if embedded in the cost function or the specific implementation.

AI has no concept of morality.
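
To make the "embedded in the cost function" point concrete, here's a minimal sketch (my own toy example, not from the article; the "profit"/"harm" terms and weights are invented): an optimizer only responds to a constraint if it appears as a weighted term in the objective it maximizes.

    import numpy as np

    # Minimal sketch: an optimizer only "cares" about what is in its objective.
    # Any notion of morality has to be encoded as an explicit, weighted term.
    def objective(action, lambda_harm=0.0):
        profit = 10.0 * action - action ** 2       # toy profit curve, peaks at action = 5
        harm = max(0.0, action - 3.0) ** 2         # toy "externality" cost above action = 3
        return profit - lambda_harm * harm         # harm only matters if weighted

    actions = np.linspace(0, 10, 1001)

    # With lambda_harm = 0 the system picks the harmful optimum;
    # only a nonzero weight changes its behaviour.
    best_unconstrained = actions[np.argmax([objective(a, 0.0) for a in actions])]
    best_penalized = actions[np.argmax([objective(a, 5.0) for a in actions])]
    print(best_unconstrained, best_penalized)   # ~5.0 vs ~3.3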


Really, the Paperclip Maximizer bugs me as an example, for all of the wrong "missing the point" reasons, much like Buridan's Ass bugs me because an actual donkey wouldn't starve to death between two equidistant hay piles.

Anyone who sets an AI up to maximize raw output of a very simple commodity is doing things incredibly wrong economically in the first place. They would go out of business paying for storage costs alone, to say nothing of everyone else's ability to counter the rogue self-replicator.

It would admittedly be a silly objection to the metaphor if not for the fact that people really do believe AI would work that way.


>They would go out of business paying for storage costs alone, to say nothing of everyone else's ability to counter the rogue self-replicator.

These things reduce the total number of paperclips produced, and therefore would be accounted for by such an AI. The Universal Paperclips [https://www.decisionproblem.com/paperclips/index2.html] game has a pretty good take on it.


There's an article with a similar premise by Charlie Stross [1]. Though both seem to apply to basically any software, not just statistical (or whatever currently falls under "AI"), and perhaps (AIUI) can be generalized to just (over)simplified models.

[1] http://www.antipope.org/charlie/blog-static/2018/01/dude-you...


For another take on this corporations ≈ AI idea, I cannot recommend enough Charles Stross' 34C3 talk on this very topic. https://www.youtube.com/watch?v=RmIgJ64z6Y4 (picks up approximately 10 minutes in).


This could go wrong in so many ways. It's a perfect recipe for unaccountability: the algorithm decided, and we don't know why, because it isn't transparent.

But there's some good potential too. It is a tool; it depends on how we wield it, or more precisely, who wields it.


Depending on what tool it would be comparable to and how it was used, an algorithm or ML model or "AI" would probably end up in the same dual-use discussions we have now.

A hammer that hammers is a good hammer. So if a system of ML/AI were optimised to hammer well (and 'be' a hammer), that is fine. If someone then decided to use that hammer to attack someone and break their bones, would that be the hammer's fault? Probably not. But when you get a hammer that is optimised for bone-breaking, the discussion changes. Same with a kitchen knife vs. hunting knife vs. kabar, which are all knives but are also all 'named' with different intentions. And suddenly it's no longer dual-use or 'who is at fault' but in the very grey area of 'what was the intention'. And that brings us back to 'transparency', which loops back into "what if it is just a tool". Darn circles.


Certainly more quantitatively scrutable than any human system.

A human can tell you a more or less convincing narrative about why they did something, but you have fewer independent and objective ways to measure that than you do for basically any conceivable computing system.


Yes, sure, but the data and interpretation might be meaningless when explained back to humans. It's like an intellectual showing off with terms and formulas:

"the algorithm did this because ...", followed by a debatable and unintelligible formula + data + stats that nobody would really understand, such that a new science would need to be invented to interpret it, and so on.


I think the common-man conception of AI is really flawed. Consider that autopilot -- clearly a form of AI -- was invented in 1914. AI is far older than computers and corporations are a clear and excellent example. It's a slippery slope -- and I think it is worth sliding all the way down. Once we realize how completely pervasive AI is, I think we have a much better understanding of how to govern it in the future.

Personally, I find cybernetics to have a much stronger philosophical basis than AI; I particularly like how Norbert Wiener described how cybernetic feedback loops can lower local entropy.
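
A minimal sketch of the kind of negative-feedback loop Wiener was describing, in the spirit of an early gyroscopic autopilot (my own illustration; all the constants are made up):

    # Minimal sketch of a cybernetic feedback loop: compare the measured heading
    # to the desired heading and feed back a proportional correction.
    desired_heading = 90.0   # degrees
    heading = 70.0           # current (disturbed) heading
    gain = 0.3               # proportional feedback gain

    for step in range(20):
        error = desired_heading - heading   # the deviation the loop fights
        heading += gain * error             # negative feedback nudges the rudder
        heading += 0.5                      # constant disturbance (wind/current)

    print(round(heading, 2))  # settles near the set point despite the disturbance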


I can't think of a definition of AI you could be using to say that an autopilot is an AI. The autopilot didn't teach itself to fly the plane; it was explicitly programmed to do so.


Did Deep Blue teach itself to play chess?

I’m not sure that “self-teaching” should be considered the defining criterion for AI.


It's not. Unless you want to rename decades of symbolic AI, probabilistic AI, and everything else that's not machine learning.


What human pilot taught themselves how to fly a plane? Arguably the Wright brothers and a few others, but otherwise humans are in approximately the same boat as the autopilot. It's just that the programming interface is a bit more sophisticated.


Us humans like to think that we are the ultimate organisms and everything else is either the material that composes us (cells and tissue) or our products (corporations, cities).

Organisms are scale-free systems. It's only our human bias to see humans as important. If aliens came to visit the Earth, they would most likely see this planet covered with massive algae which glow at night (i.e. our cities). Cities at night are more noticeable from space than the cells they are composed of - tissue cells (houses) or blood cells (cars and pedestrians).

For some reason all aliens in movies are pictured as these humanoid, human-sized creatures, which is most likely due to the same human-centric bias.


I strongly recommend the book version (the films by Tarkovsky and Soderbergh are okay but the book is better) of Solaris. It explores this idea of an alien life form that is decidedly un-humanoid.

[Lem, the author]... wrote that he deliberately chose the ocean as a sentient alien to avoid any personification and the pitfalls of anthropomorphism in depicting first contact.

https://en.wikipedia.org/wiki/Solaris_(novel)


Lem was good at having alien aliens in a way I haven't seen any other author pull off. I also recommend Fiasco [0].

[0]https://en.wikipedia.org/wiki/Fiasco_(novel)


>Us humans like to think that we are the ultimate organisms and everything else is either the material that composes us (cells and tissue) or our products (corporations, cities).

And yet if we perished from the planet, literally every single other species, animal, mineral, or vegetable, would benefit from our absence. Everything would grow back at an amazing rate and it would be like we were never here.

Meanwhile, if the bees disappear, we're all screwed.


Even in books, where the limitations of costumes and sets are less relevant, most aliens are at least somewhat humanish. Though there are certainly some more exotic ones like the pattern jugglers, First Person Singular, or the Prime.


An aside, but Adrian Tchaikovsky's Children of Ruin/Time center on intelligent spiders and mollusks. Earth-centric as they were bio-engineered from Earth-native species, but an interesting take on non-human species.


If we're name-dropping, the Vernor Vinge pack-mind dog things are another fun example of non-human races with actually non-human modes of cognition, even if they're not as far removed as Solaris/Revelation Space sentient oceans. In contrast, although I really liked A Deepness in the Sky for other reasons, the spiders ended up feeling way too much like humans in funny bodies rather than any kind of actual non-human intelligent species.


Really comes down to a lack of imagination. I imagine there's a good chance life elsewhere is unrecognizable to us as life. No id, no ego, just a vivacious electrochemical reaction bent on survival. Think viruses on a larger scale.


"Good chance" based on what?


The size of the universe, and our relative size within it.


That's a non sequitur.


Color me unconvinced



“Individual” and “collective” are fundamental/metaphysical concepts. General-purpose AI will probably have a concept of individual influence/veto among its “nodes” so that it is capable of self-organizing and reorganizing.

It remains to be seen if corporate-like structure is something it surpasses, but complex systems seem to usually have orders of influence, and multiple hands on any given wheel.


Obligatory plug for The Corporation (book & film)

https://thecorporation.com/

The similarity I see between AI and corporations is single-mindedness. They ignore anything outside the scope of their interest, "externalities" in the parlance of The Corporation. Those externalities can have very real consequences.


My advanced image recognition matched that picture to:

1) Rectilinear beehive (95% confidence)
2) Rectilinear jail (5% confidence)
3) Ancestral forest home (0% confidence)


ML definitely doesn't. Not a hierarchical system at all. There have been AIs which work like hierarchical control systems, but ML systems are almost the anti-pattern to that.


ML is hierarchical: layers near the input pick out low-level, specific features, and deeper layers categorise more abstractly.
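
If it helps, here's a minimal sketch of the layered structure being argued about (assuming PyTorch; purely illustrative, not any particular production model): each layer consumes the previous layer's features, which is the sense in which deep nets are hierarchical.

    import torch
    from torch import nn

    # Each block consumes the previous block's features, so representations are
    # built up hierarchically from the input toward the output.
    net = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features (edges, blobs)
        nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # compositions of those features
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 32 * 32, 10),                  # most abstract: class scores
    )

    x = torch.randn(1, 3, 32, 32)   # a dummy 32x32 RGB image
    print(net(x).shape)             # torch.Size([1, 10])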


Not all deep learning is hierarchical and not all machine learning is deep learning.


Research needs money.

Big money tends to go towards what generates great returns.

Thus a lot of research is devoted to dull tasks that bring business value.

No need to involve AI, or research for that matter.


AI thinks like a corporation in the sense that neither really thinks. One is a misleading buzzword for modern computer-driven pattern recognition; the other is a legal label for a group of people trying to make money together. This premise is kind of silly.


This is understandable: both AI and corporations are psychopaths - they have no emotions to get in the way of rational thinking, with the ultimate goal of maximizing their personal gain.

The question remains: how do we teach AI empathy?


We teach AI empathy by programming it with the heuristics of care -- optimizing for human well-being instead of profit. A difficult thing to achieve when the entities paying for the design of AIs are themselves driven by the profit motive, and for-profit AIs are self-perpetuating.


I think there's an argument that "empathy" (or even its wider-scoped sibling "compassion") has a potentially sociopathic element: a sort of cynical manipulation to appear virtuous to the tribe, one which is more effective if it convinces our own brains, so as to more effectively convince others. See Tim Wilson's "Strangers to Ourselves", Hanson & Simler's "Elephant in the Brain", etc.

Supposedly, there's also an empathic element to being an effective hunter, both in the human and the animal kingdom. To deeply understand and empathize with your prey helps you capture and devour them. Indeed, the marketing, advertising, and product development divisions of corporations can be deeply "empathic" to the desires of the buyer, without necessarily being to their benefit.

At any rate, I don't disagree; corporations are agnostic to societal externalities, and highly incentivized to create habit-forming relationships with customers, hence often leading to behavior indistinguishable from sociopathy. To some extent, I think the best we can hope for is aligned incentives; sometimes what's best for a company's bottom line really is to make people's lives better. But we shouldn't be naive to exploitative relationships (even when nominally voluntary), nor should we lean on "The Market" as the singular societal organizing principle.

Extrapolating to AI, we should be very cautious as to what metrics we optimize any particular algorithm for. Corporations optimizing for stockholder returns are ultimately a subset of "paperclip maximizing"; what we really want is a balance of multiple leading indicators of success, and to constantly be tweaking those success conditions as we discover new metrics for measuring human flourishing.
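
A rough sketch of that "balance of multiple indicators" idea (my own toy example; the metric names and weights are invented): score candidate actions on a weighted combination of several metrics rather than a single one, and treat the weights themselves as something to keep revisiting.

    # Toy multi-metric scoring instead of maximizing a single number.
    candidates = {
        "ship_addictive_feature": {"revenue": 9.0, "user_wellbeing": 2.0, "trust": 3.0},
        "ship_useful_feature":    {"revenue": 6.0, "user_wellbeing": 8.0, "trust": 7.0},
    }

    weights = {"revenue": 0.3, "user_wellbeing": 0.4, "trust": 0.3}  # revisited over time

    def score(metrics):
        return sum(weights[k] * v for k, v in metrics.items())

    best = max(candidates, key=lambda name: score(candidates[name]))
    print(best)  # with these weights: "ship_useful_feature"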


It is basically hacking the algorithm. You seek to bias its output toward some other goals versus what the optimal math would give. One might argue that developing such algorithms for good intentions runs the risk of providing technologies for negative intents. But I think the research on the negative side is generally way ahead anyway, as security is so important.


There is a wide range of what people think empathy is and how to express it in their actions. I think the problem with AI is that it allows people to hide behind it so they don’t have to take responsibility for the actions of the AI they designed and paid for.


AI doesn't think. It's just a tool based on math.


"Just" is something of a sneaky word: https://alistapart.com/blog/post/the-most-dangerous-word-in-...

But by some measure, human thinking is itself math: emergent pattern-recognition traveling through cortical hierarchies and deeply nested layers of abstraction and metaphor. While it's indisputable that there's a qualitative leap between something like computer vision, and the basic reasoning capacity and perceptual systems of a human (even a toddler), there isn't anything intrinsically magical about atoms rather than bytes. It appears to simply be a factor of scale, and the billion-year-old "legacy code" gifted to our wetware by iterated selection pressures.


Humanity doesn’t think. It’s just a bunch of neurons governed by basic physical laws giving rise to some emergent outcome.



