
AI thinks like a corporation (2018) - samclemens
https://www.economist.com/open-future/2018/11/26/ai-thinks-like-a-corporation-and-thats-worrying
======
Barrin92
There's a great article by Ted Chiang about this[1], comparing the psychology
of the paperclip-maximizer AI scenario to the single-minded "increasing
shareholder value" mentality of business, in particular Silicon Valley
startups.

He points out that the companies of the very people who fear the AI apocalypse
already do what they accuse AI of doing: pursuing growth and 'eating the
world' at all costs to maximize narrowly defined singular goals.

[1] [https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway](https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway)

~~~
madaxe_again
I’d actually turn both this and the original article around, and say that
corporations behave like AIs. They are artificial. They are (variably)
intelligent.

The functions of processing inputs to create meaningful outputs are carried
out by neurons made of meat (human employees) and of silicon. These processes
are governed by rules and metaprocesses, which the entity continuously
optimises and improves, in order to further its own growth and to improve its
fitness for its purpose - typically, enhancing shareholder value. I cannot
think of a single criterion for intelligence or even life that a corporation
does not possess, from independent action to stimulus response to reproduction
to consumption to predation.

I think the first true hard AI will emerge accidentally, and it will be a
corporation that has largely optimised humans out of the loop - but even with
humans as components of the system, a gestalt, an artificial
supra-intelligence, can still emerge.

This also neatly sidesteps the whole question of “should AIs have rights?”, as
corporations are already legally persons.

~~~
qznc
I agree:
[http://beza1e1.tuxen.de/companies_are_ai.html](http://beza1e1.tuxen.de/companies_are_ai.html)

It leads to an interesting question: Is a corporation controlled by its CEO?

~~~
madaxe_again
Nice - that overlaps pretty much perfectly with my reasoning!

In my view, no. The CEO is at best a race-condition-breaking function, at
worst a parasitic infection.

If corporations are self-optimising AIs, then they will optimise CEOs out of
existence if they aren’t conducive to fitness.

~~~
Joking_Phantom
Does that not suggest that CEOs are worth their cost to the vast majority of
companies that retain them? What companies operate without a de facto CEO?

~~~
killjoywashere
Well, CEOs may be inherited from non-optimal initial conditions of early
ancestors and simply haven't been replaced by a better competitor yet. Or they
have been and we don't understand that yet. Naturally occurring genomes encode
processes that result in all sorts of things that don't actually have to
occur: the appendix, the frenulum, most of the DNA itself, as far as we can
tell. Nature's full of random dead ends that keep propagating because the
system hasn't crossed the fitness valley to the next local maximum yet.

~~~
red75prime
A chicken could run faster if it optimised its head away, because it could
ignore limiting factors like potential obstacles in its current course. But
for how long?

Actual problem solving in a corporation is done by humans. Corporate policy
can provide a set of incentives to align the goals of its employees with the
goal of the corporation, and a structure to combine specialists' efforts more
or less effectively. But someone has to think about the corporation as a whole
and change its policies when the need arises.

Cephalization is a known evolutionary trend after all.

------
defanor
There's an article with a similar premise by Charlie Stross [1]. Though both
seem to apply to basically any software, not just statistical models (or
whatever currently falls under "AI"), and perhaps (AIUI) can be generalized to
just (over)simplified models.

[1] [http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html](http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html)

------
aaron-santos
For another take on this corporations ≈ AI idea, I cannot recommend enough
Charles Stross's 34C3 talk on this very topic.
[https://www.youtube.com/watch?v=RmIgJ64z6Y4](https://www.youtube.com/watch?v=RmIgJ64z6Y4)
(picks up approx 10mins in).

------
tartoran
This could go wrong in so many ways. It's a perfect recipe for
unaccountability: the algorithm decided, and we don't know why because it
isn't transparent.

But there's some good potential too. It is a tool; it depends on how we wield
it, or more precisely who wields it.

~~~
AndrewKemendo
Certainly more quantitatively scrutable than any human system.

A human can tell you a more or less convincing narrative about why they did
something, but you have fewer independent and objective ways to measure that
than you do basically any conceivable computing system.

~~~
gigicontra
Yes, sure, but the data and interpretation might be meaningless when explained
back to humans. It's like an intellectual showing off with terms and formulas:

"the algorithm did this because ..." followed by a debatable and
unintelligible formula + data + stats that nobody would really understand, so
a new science needs to be invented to understand it, and so on.

------
dr_dshiv
I think the common-man conception of AI is really flawed. Consider that
autopilot -- clearly a form of AI -- was invented in 1914. AI is far older
than computers and corporations are a clear and excellent example. It's a
slippery slope -- and I think it is worth sliding all the way down. Once we
realize how completely pervasive AI is, I think we have a much better
understanding of how to govern it in the future.

Personally, I find cybernetics to have a much stronger philosophical basis
than AI; I particularly like how Norbert Wiener described how cybernetic
feedback loops can lower local entropy.
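Wiener's point can be seen in a few lines: a negative-feedback loop drives a
system toward a setpoint, shrinking the spread of states it can occupy. A toy
sketch in Python (the thermostat, the gain, and all the numbers here are
hypothetical illustrations, not anything from the cybernetics literature):

```python
# Minimal negative-feedback loop (hypothetical thermostat sketch): the
# controller measures the error against a setpoint and applies a
# proportional correction, so the deviation shrinks every step.

def run_thermostat(temp, setpoint=20.0, gain=0.5, steps=20):
    """Drive temp toward setpoint via proportional feedback."""
    for _ in range(steps):
        error = setpoint - temp   # sense: compare current state to goal
        temp += gain * error      # act: correct in proportion to the error
    return temp

# Starting far from the goal on either side, the loop converges to the
# setpoint regardless of the initial state.
print(round(run_thermostat(5.0), 2))   # → 20.0
print(round(run_thermostat(35.0), 2))  # → 20.0
```

The local-entropy framing is just this: whatever distribution of starting
temperatures you feed in, the feedback loop funnels them all into a narrow
band around the setpoint.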

~~~
djs070
I can't think of a definition of AI under which autopilot counts as AI.
Autopilot didn't teach itself to fly the plane - it was explicitly programmed
to do so.

~~~
drdeca
Did Deep Blue teach itself to play chess?

I’m not sure that “self-teaching” should be considered the defining criterion
for AI.

~~~
shmageggy
It's not. Unless you want to rename decades of symbolic AI, probabilistic AI,
and everything else that's not machine learning.

------
snidane
We humans like to think that we are the ultimate organisms and everything else
is either the material that composes us (cells and tissue) or our products
(corporations, cities).

Organisms are scale-free systems. It's only our human bias to see humans as
important. If aliens came to visit the Earth, they would most likely see this
planet covered with massive algae which glow at night (i.e. our cities).
Cities at night are more noticeable from space than the cells they are
composed of - tissue cells (houses) or blood cells (cars and pedestrians).

For some reason all aliens in movies are depicted as humanoid, human-sized
creatures, which is most likely due to the same human-centric bias.

~~~
thfuran
Even in books, where the limitations of costumes and sets are less relevant,
most aliens are at least somewhat humanish. Though there are certainly some
more exotic ones like the pattern jugglers, First Person Singular, or the
Prime.

~~~
bwi4
An aside, but Adrian Tchaikovsky's Children of Ruin/Time center on intelligent
spiders and mollusks. Earth-centric, as they were bio-engineered from
Earth-native species, but an interesting take on non-human species.

~~~
livueta
If we're name-dropping, the Vernor Vinge pack-mind dog things are another fun
example of non-human races with actually non-human modes of cognition, even if
they're not as far removed as Solaris/Revelation Space sentient oceans. In
contrast, although I really liked A Deepness in the Sky for other reasons, the
spiders ended up feeling way too much like humans in funny bodies rather than
any kind of actual non-human intelligent species.

------
metabagel
Obligatory plug for The Corporation (book & film)

[https://thecorporation.com/](https://thecorporation.com/)

The similarity I see between AI and corporations is single-mindedness. They
ignore anything outside the scope of their interest - "externalities" in the
parlance of The Corporation. Those externalities can have very real
consequences.

------
ld-50-agi-v3
My advanced image recognition matched that picture to:

1) Rectilinear beehive (95% confidence)

2) Rectilinear jail (5% confidence)

3) Ancestral forest home (0% confidence)

------
Animats
ML definitely doesn't. It's not a hierarchical system at all. There have been
AIs that work like hierarchical control systems, but ML systems are almost the
anti-pattern to that.

~~~
jeromebaek
ML is hierarchical. the upstream layers categorise more abstractly and
downstream layers categorise more specifically.

~~~
FreakLegion
Not all deep learning is hierarchical and not all machine learning is deep
learning.
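To make that last point concrete, here's a sketch of a machine-learning
method with no layer hierarchy at all: a 1-nearest-neighbor classifier, which
"learns" simply by memorizing labeled examples (the data and names below are
made up for illustration):

```python
# 1-nearest-neighbor classification: no layers, no hierarchy, no training
# loop - just "find the memorized example closest to the query and copy
# its label". Still machine learning by any standard definition.

def nn_classify(train, query):
    """Return the label of the training point closest to the query."""
    def dist(a, b):
        # Squared Euclidean distance (square root not needed for argmin).
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda pair: dist(pair[0], query))[1]

# Toy data: two well-separated clusters labeled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b")]

print(nn_classify(train, (0.3, 0.1)))  # → a
print(nn_classify(train, (4.8, 5.1)))  # → b
```

The same goes for decision stumps, linear regression, k-means, and so on:
hierarchy of abstraction is a property of deep networks specifically, not of
machine learning in general.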

------
blackbear_
Research needs money.

Big money tends to go towards what generates great returns.

Thus a lot of research is devoted to dull tasks that bring business value.

No need to involve AI, or research for that matter.

------
rotrux
AI thinks like a corporation in the sense that neither really thinks. One is a
misleading buzzword for modern computer-driven pattern recognition, and the
other is a legal label for a group of people trying to make money together.
This premise is kind of silly.

------
mamon
This is understandable: both AI and corporations are psychopaths - they have
no emotions to get in the way of rational thinking, with the ultimate goal of
maximizing their personal gain.

The question remains: how do we teach AI empathy?

~~~
frandroid
We teach AI empathy by programming it with the heuristics of care - optimizing
for human well-being instead of profit. A difficult thing to achieve when the
entities paying for the design of AIs are themselves driven by the profit
motive, and for-profit AIs are self-perpetuating.
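A toy way to see the "same optimizer, different objective" point: the action
an optimizer picks depends entirely on what its objective scores. Everything
below (the action names, the payoff numbers) is a hypothetical illustration,
not a real AI design:

```python
# The optimizer is identical in both cases; only the objective differs.

def choose(actions, objective):
    """Pick the action that maximizes the given objective."""
    return max(actions, key=objective)

# Each candidate action carries a hypothetical (profit, wellbeing) payoff.
actions = {
    "aggressive_ads": (10, -5),
    "fair_pricing":   (4, 6),
    "do_nothing":     (0, 0),
}

profit_only = lambda a: actions[a][0]   # optimize profit
care        = lambda a: actions[a][1]   # optimize human well-being

print(choose(actions, profit_only))  # → aggressive_ads
print(choose(actions, care))         # → fair_pricing
```

The hard part, as the comment says, isn't writing the care objective - it's
getting profit-driven entities to point their optimizers at it.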

------
avocado4
AI doesn't think. It's just a tool based on math.

~~~
lukifer
"Just" is something of a sneaky word: [https://alistapart.com/blog/post/the-most-dangerous-word-in-software-development/](https://alistapart.com/blog/post/the-most-dangerous-word-in-software-development/)

But by some measure, human thinking is itself math: emergent pattern
recognition traveling through cortical hierarchies and deeply nested layers of
abstraction and metaphor. While it's indisputable that there's a qualitative
leap between something like computer vision and the basic reasoning capacity
and perceptual systems of a human (even a toddler), there isn't anything
intrinsically magical about atoms rather than bytes. It appears to simply be a
matter of scale, and of the billion-year-old "legacy code" gifted to our
wetware by iterated selection pressures.

