
Capitalism is a paperclip maximizer - mcscom
http://thoughtinfection.com/2014/04/19/capitalism-is-a-paperclip-maximizer/
======
madaxe_again
Capital is not what most people conceive it to be. It is commonly (and
correctly) viewed as a "store of wealth", or as "the crystallisation of
labour-time" - but capital comes in different flavours. When you deal with
capital in the form of goods that provide an incremental benefit to the user,
what you have is indeed capital; when you deal with money, it's up to you
whether it is capital or not.

Often, people confuse _money_ with _capital_. Money is an exchange mechanism
which facilitates the transfer of value between parties through a common unit
of exchange. Money can be viewed as a subset of capital (i.e. as a sedentary
store of value), or it can be viewed _as the destruction of the benefits of
labour_, as it converts an active instrument (i.e. the work I am doing, or
the goods I have produced) into a sedentary instrument which provides no
incremental benefit, _unless it is spent_.

So - how this relates to the article - capitalism is a system which currently
optimises for money. Take a step back and consider that converting labour and
value into money effectively freezes the benefit of that labour away; accruing
money as an end in itself then ceases to make sense, as much of this value
ends up lying in the ledgers of corporations, governments, or wealthy
individuals, doing nothing for anybody other than perhaps soothing some of the
more fragile elements of human psychology (the need for security).

We have let a means become the end, and have forgotten that the Purpose of all
of this is to promote human happiness - I mean, what other possible Purpose
_can_ there be?

~~~
bko
Some would disagree that the purpose of existence is to promote happiness.

Robert Nozick offered this thought experiment, in light of which I would not
opt to maximize happiness:

> Nozick asks us to imagine a machine that could give us whatever desirable or
> pleasurable experiences we could want. Psychologists have figured out a way
> to stimulate a person's brain to induce pleasurable experiences that the
> subject could not distinguish from those he would have apart from the
> machine. He then asks, if given the choice, would we prefer the machine to
> real life? [0]

[0]
[https://en.wikipedia.org/wiki/Experience_machine](https://en.wikipedia.org/wiki/Experience_machine)

~~~
kaffeemitsahne
Why wouldn't you?

~~~
cylinder
Replace "machine" with "drugs" and we already have such a thing. However, the
pleasure from drugs is stigmatized and prohibited, and addiction to the
pleasure machine is seen as life-destroying.

~~~
sangnoir
IMO - it's not the pleasure itself that is stigmatized, but the side effects:
the deteriorating health of the user, as well as the occasional resort to
immoral or even criminal means to feed the habit.

There's also a whole dog-whistle aspect to the 'war on drugs' (e.g. massive
differences in sentences on crack vs. cocaine offences)

------
plutooo
It's a faulty analogy. In fact, a paperclip maximizer goes against every
single principle of capitalism out there.

The paperclip maximizer is bad exactly because it isn't based on capitalism;
it will maximize paperclips regardless of whether someone values them or not!
At _any cost_, it will maximize paperclips.

Markets, on the other hand, have the goal of providing products and services
that people are willing to pay for. They adapt: the more people value
something, the more capital it attracts, and the more resources are allocated
to it.

~~~
irln
You've identified one of the key defects in capitalism today: the disconnect
from a "true" market where value is measured by folks' willingness to pay for
things.

~~~
plutooo
So, the problem is not enough capitalism? ;)

~~~
irln
Not more capitalism :), just a better way to manage the money supply than a
few folks on the boards of central banks. If we could somehow tie increases
and decreases of the money supply to a classic market definition (i.e. people
buying and selling), I think it would have a very positive impact on everyone.

------
VLM
The article assumes, without justification, that a best-effort maximizer would
have no competition and a long life. We have not seen that in endless
lower-powered implementations, and there seems to be no trend, so unless it
starts going exponential...

There might be a barrier here, like the speed of light, or various
mathematical limits that formalize the old saying that too many cooks spoil
the broth. The complexity an uber-maximizer would need in order to handle the
complex universe means it would be unprovably complicated and unreliable. Or
you'd need a computer bigger than the universe to answer the halting question
for a universe-controlling control loop - or even for something much smaller,
like a mere multinational corporation. Another way to put it: a predatory
competitor-maximizer would inevitably be better at predation than a victim
that focuses more on some arbitrary goal than on avoiding becoming a meal, so
in a hyper-optimized playing field all you'd get is ever-better predators
eating each other and eating weaker, non-predation-focused orgs - basically
the modern business/finance system.

------
marcosdumay
Wait, what?

Capitalism maximizes "giving people what they want". This is much closer to
the benevolent AI models than to paperclip maximizers.

~~~
lukifer
Implicit in the statement "giving people what they want" is a set of
assumptions about what "want" means. Do I want life-saving medicine? Do I want
a soul-sucking job that pays the bills? Do I want to be de-facto-required to
participate in Facebook in order to socialize? I may voluntarily choose all of
these things, and while that is unambiguously better than being forced to do
them by violence, that's subtly distinct from "wanting" them.

There are also varied spectra of wants: on some level, I want to pursue self-
actualization goals that are years away. On another level, I want to step out
of the rat race and read a good book. On yet another level, I want to watch
TV, eat candy, and click a button that delivers a dopamine reward directly to
my brain. These wants are frequently in conflict, and capitalism is more
incentivized (or better equipped) to satisfy some of these than others. (I
remember a
good PG essay about the phenomenon of short wants vs. long wants, but I can't
find it; neighboring ideas are found in
[http://paulgraham.com/addiction.html](http://paulgraham.com/addiction.html)
and
[http://paulgraham.com/distraction.html](http://paulgraham.com/distraction.html))

I don't think markets are a bad technology, any more than any other
technology. But the paperclip analogy fits: people do want paperclips, but
they also want a variety of other things that aren't paperclips. If we create
social machines that optimize some wants over others, a plethora of human
needs and desires may be drastically under-supplied as an unintended side
effect.

~~~
jerf
"Implicit in the statement "giving people what they want" is a set of
assumptions about what "want" means."

And while that's all very interesting and important in the abstract, it's
worth double-checking that you don't get yourself lost in the weeds... which I
think you did... and forget that the paperclip maximizer is scary precisely
because it has _no_ concept of wants in it. With capitalism, wants can be
changed in real time, and the system reacts. The paperclip maximizer does not.
They are fundamentally different.

There is a memeset that strongly encourages you to fling up whatever word
smokescreen is necessary to ensure that nothing positive is said about
Emmanuel Goldstein, today played by Capitalism, but it's still just a word
smokescreen. There is a fundamental difference between a paperclip maximizer
and capitalism, and you should not throw yourself into a word tizzy until
you've confused your rational brain enough to fall back on comfortable
emotional judgments about capitalism. Not every bad thing that can be said
about capitalism is true simply because it's a bad thing said about
capitalism.

~~~
lukifer
You are right that the analogy breaks down at the assumption that humans would
allow the paperclip machine to run amok; human wants do powerfully change and
influence the market machine. At the same time, the abyss gazes into us, and
human wants change in response to the machine as well.

I'll give an example that intersects the Pavlovian scare words of both
"capitalism" and "government": the Prison-Industrial Complex. It's reasonable
to want to remove members of society who are dangerously violent, or
"cheaters" of sufficient scope. Yet iterate this desire over enough time, and
it takes on a life of its own: politicians who want to look tough on crime,
parents who want their fears assuaged, private companies who want to profit
from correctional tax dollars, and most insidiously, the swaths of police,
prison guards, and support staff who want paychecks, benefits and job
security.

The final product is something we don't actually want: the most populous
prison system in the world, which is massively expensive, socioeconomically
predatory, and fails to rehabilitate most of its inmates. Yet there are enough
stakeholders who _do_ want the institution to persist for reasons of personal
benefit (and enough taxpayers who want to imagine a cute moralistic story and
are willing to pay for the luxury of ignorance), that a massive machine of
oppression is created and sustained out of a simple, reasonable human want.

I'm not intrinsically anti-capitalist; I'm in the camp of Jaron Lanier, in
that money, markets, and corporations are all technologies, and neither good
nor evil. But technology should serve humans, and not the other way around;
that means paying attention to the power of unchecked, autonomous feedback
loops and their power to influence human behavior and our various social
ecosystems.

~~~
eli_gottlieb
>It's reasonable to want to remove members of society who are dangerously
violent, or "cheaters" of sufficient scope. Yet iterate this desire over
enough time, and it takes on a life of its own: politicians who want to look
tough on crime, parents who want their fears assuaged, private companies who
want to profit from correctional tax dollars, and most insidiously, the swaths
of police, prison guards, and support staff who want paychecks, benefits and
job security.

Of course, this assumes all the actors were initially well-intentioned, which
they weren't. Plenty of people supported harsh policies of criminalization and
"justice" because they wanted to come down hard on the dark-skinned and the
poor.

------
Rhapso
Economic systems are optimizers (each with slightly different objective
functions).

One of the first things we learn about optimizers is that they are rarely
suited to solving problems globally, and that the best technique is to
identify and apply the locally optimal solution over each range of input.

TLDR: We need to admit that there is not a globally applicable economic
system, and learn to apply different models where they are most
efficient/least disastrous.
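To make the TLDR concrete, here's a toy sketch - entirely my own, with
invented "models" and numbers - of applying whichever of two allocation rules
scores better in each regime, rather than crowning one global winner:

```python
# Two invented "economic models", each scored across regimes of scarcity;
# neither wins everywhere, so we apply each where it performs best.

def rule_market(scarcity):
    # hypothetical: performs well when goods are plentiful
    return 1.0 - scarcity * scarcity

def rule_rationing(scarcity):
    # hypothetical: performs well under extreme scarcity
    return scarcity * 0.8

regimes = [0.1, 0.3, 0.5, 0.7, 0.9]  # scarcity levels to cover
choices = {
    s: max([("market", rule_market(s)), ("rationing", rule_rationing(s))],
           key=lambda kv: kv[1])[0]
    for s in regimes
}
print(choices)  # the locally best model flips as the regime changes
```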

------
bmmayer1
Capital doesn't maximize capital; it maximizes productivity. In the absence
of controls, capital naturally flows to productive uses so that its holders
can maximize it - but only because productivity creates wealth; the capital
itself does not.

~~~
biomene
Capital only maximises productivity if it gains an advantage by doing so. In
other words, productivity is a means for Capital to maximise itself, not an
end. There are infinite examples of this: shrimp peeling machines have existed
for ages, but shrimp is still mostly manually peeled in Thailand because it
has a cheap workforce.

~~~
legitster
The workforce in Thailand is cheap because there is not enough capital. The
answer will never be less capital.

~~~
Sakes
He's not making an argument for less capital. I think he needs to clarify his
definition of productivity.

------
epx
People fantasize too much about capitalism. The definition of capitalism is:
capital and income are expressed under the same monetary unit (like mass and
energy could be):

income = capital x interest rate

Earlier alternatives put capital in a separate "namespace". In the Middle
Ages, only nobles could hold land, and land was the only capital allowed to
exist ("usury" was forbidden, you know). Then came mercantilism, which defined
wealth as the quantity of gold one held; and so on.

Capitalism will certainly pass, but hopefully without going back to
restricting access to capital. I have imagined something like a "complex"
interest rate, with the imaginary part representing natural resources and the
real part the human effort that is the only thing currently measured in
monetary units. But bimetallism failed everywhere it was adopted, and I think
the imaginary part of a price would become dominant, since natural resources
are becoming the limit of economic output.
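As a toy sketch of that "complex rate" idea (all figures invented, purely
illustrative):

```python
# Income as capital * rate, with a complex rate: the real part prices
# human effort, the imaginary part prices natural resources.
# The 3% / 2% figures are made up for illustration.

capital = 1000.0
rate = complex(0.03, 0.02)

income = capital * rate
print(income.real)  # effort-denominated component of income
print(income.imag)  # resource-denominated component of income
```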

------
mhb
What we really want is a happiness maximizer but no one knows how to make one
of those which leverages the innate characteristics of people. Capitalism
seems like a pretty good system to pragmatically approximate this happiness
maximizer.

Yes, there are negative externalities to capitalism including possibly
maximizing short term happiness at the expense of long term. Is there
something better?

Framing this in terms of the paperclip maximizer made this idea appear more
interesting than it turned out to be.

------
electricblue
War can only destroy capital; perhaps the most breathtaking destruction of
capital in history was WWII. So war would be antithetical to this hypothetical
capital maximizer, and avoiding it would be among its top priorities.
Similarly, a market system based on human rationality is far too volatile for
our capital maximizer; it would look to replace that ASAP.

------
electricblue
Thinking about this some more, it would seem that if we really have a Capital
Maximizer, then the system's progress toward its objective (creating capital)
is highly inefficient. If your goal is to create capital in the world, you
need better infrastructure - after all, what capital is more useful than that
which can easily be used to create more capital? Long term, if you're a
Capital Maximizer trapped on this planet, you have only a small, finite amount
of resources that can be converted to capital, so you would want to spread out
to other planets very soon. These are obvious things we aren't doing very
well.

~~~
eli_gottlieb
There are currently a few rules crippling the Capital Maximizer:

1) Don't think in a longer term than five years. That's _risky_.

2) Labor is the worst expense to have, so minimize the amount of expensively
useful work that has to be done.

3) Collude with others when it's gainful, but don't _coordinate_ with others
to get around rules (1) and (2). Coordination is _for commies_.

"It does often seem that, whenever there is a choice between one option that
makes capitalism seem the only possible economic system, and another that
would actually make capitalism a more viable economic system, neoliberalism
means always choosing the former. The combined result is a relentless campaign
against the human imagination." -- David Graeber

------
unoti
The root problem here is not economics, it is basic human nature. Humans have
forever been seeking to increase their power base at the expense of others,
and been corruptible by greed. This is not new, and it is not limited to
capitalism. It's basic human nature, and it's one of our key weaknesses.

~~~
thiagoharry
I'm not seeking to increase my power base at the expense of others. If I
could earn less money doing something that made the world a better place, I
would happily choose it. Am I not human?

I see this kind of answer as an explanation widely disseminated to justify
our economic system as inevitable, but one with very little scientific basis.
To me it's wishful thinking.

The kind of economic growth observed under capitalism, and some of its
characteristics - like cyclic crises unrelated to natural changes such as
climate - are unique to our civilization. And it's a very new thing in human
history (less than 300 years). What the long-term effects of these things
will be is still to be comprehended.

------
ArkyBeagle
Capitalism depends on price theory, which predicts that making too many
paperclips will cause a glut and the price will drop. So a
paperclip-maximizing AI might oughta be built to take that into account.
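That point can be sketched with a toy model (demand curve and costs
invented): give the maximizer a downward-sloping demand curve and it settles
on a finite, profit-maximizing output rather than making paperclips without
limit.

```python
# A profit-aware paperclip maker facing a hypothetical linear demand
# curve: price falls as output floods the market, so profit peaks at
# a finite quantity instead of growing without bound.

def profit(quantity, unit_cost=2.0):
    price = 10.0 - 0.01 * quantity  # glut drives the price down (invented curve)
    return quantity * (price - unit_cost)

best_q = max(range(0, 1001), key=profit)
print(best_q)  # the profit-maximizing output is finite
```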

~~~
dredmorbius
Point missed.

It's not that any one product is being produced. "Capital" (or profit, or
wealth) here is the combined market aggregate of _all_ production. The problem
is that this too, as with any single-optimum process, leads to madness.

In the case of capitalism, externalities, equity/equality, sustainability,
and systemic global risk are all undervalued, and simple short-term profit
maximisation tends strongly to "underproduce" them.

~~~
ArkyBeagle
I get the point - it's necessary to reanimate the mouldering bones of Malthus
now and again. I just disagree, because the empirics of the case are pretty
lopsided against him. He could not have known about engines, which now appear
to cause one whale of an externality.

The only thing Capitalism did was replace Mercantilism at scale. You had
production before.

What's weird is that economists talk about ways to actually price things like
externalities, equality, sustainability and/or risk. That might be madness or
it might be reason.

If we can't price them, then are we not dropping the checkbook and reaching
for the sword?

------
nevinera
_Corporations_ can be usefully described as 'profit-maximizers', but
capitalism itself doesn't have a coherent behavior - you can't describe it as
any kind of AI.

~~~
Sakes
AI is just a subset of intelligence. You absolutely can describe capitalism as
an intelligent system since it is primarily made up of units (people) which
are intelligent.

If capitalism didn't have a coherent behavior, then how could we ever rely
upon it as a primary driver of our economy? It must be coherent and on some
level understandable for us to make predictions around it.

~~~
nevinera
You can call an ecosystem 'intelligent' if you want to, I suppose, but there's
not really any way to discuss it as a 'maximizer' when it has no goals and no
intentional behavior. You could as easily describe water as a 'gravitational
potential' minimizer - it has a collective aggregate behavior that you can
characterize, but that doesn't make it intelligent by most definitions of the
word.

>If capitalism didn't have a coherent behavior, then how could we ever rely
upon it as a primary driver of our economy?

It _is not_ a 'primary driver' of our economy; it is a _description_ of our
economy. The primary drivers of our economy are corporations - entities
obeying a profit motive. They have behaviors, intentions, and goals: they
attempt to maximize shareholder profits. "Capitalism" doesn't intend anything
or have any goals; it doesn't even have any non-emergent behaviors.

~~~
Sakes
Every time I start writing a response to this, I can't help but feel it will
simply be misinterpreted. I thought my original response was pretty succinct
and clear. If you'd like to continue, please select one discrepancy and I'll
be happy to discuss it further with you.

~~~
nevinera
Our disagreement may be definitional - your original response was succinct and
clear, but did not make an argument, merely an assertion.

A) I see no reason that 'being made up of units which are intelligent' ought
to classify a system as intelligent.

B) I see no way in which you can usefully talk about anything without
intentional behavior as a 'maximizing intelligence'.

You can't call 'capitalism' the 'driver of our economy' for the same reason
you can't call 'ecology' the 'driver of natural selection' or 'conflict' the
'driver of our military conquests'. You are conflating a system with the units
in the system, and talking about an ecology of intelligent units as if it is
satisfying some behavioral utility function.

'Capitalism' doesn't have a utility function, because it doesn't have
_decision-making capacity_, which is pretty much the only prerequisite for
intelligent behavior. It doesn't _drive_ our economy, it _is_ our economy. The
actors in the system are corporations and humans (and governments, unions, and
a few other united organizational structures), therefore those are the things
which can exhibit intelligence.

~~~
Sakes
I hope you don't mind, but I'm only going to address one thing right now. If
the conversation continues, I will continue to discuss more with you.

Let's start with (A). You are correct: the way I described it does not
explicitly explain how it is intelligent. Let's adopt the strictest
definition of intelligence, which is the ability to predict.

When I refer to capitalism here, I'll be referring to capitalism as
implemented, not the theory. Capitalism is intelligent. It is made up of
people, who are intelligent. These components are at times working together
and at times working against each other, much like your conscious and
subconscious minds. Conflicts are resolved in the market, or through the
state via legislation and trials. The result of components working together
and resolving their conflicts is how capitalism decides to behave. Having
behavior is not sufficient for intelligence, though: a virus has behavior but
is not intelligent.

So does a capitalist system make predictions? One example should be sufficient
to make the answer to this question yes. If I can show that a capitalist
system makes predictions, then I will have proven that it is intelligent.

Let's use America for this example. One thing American capitalism seeks is to
mute the highs and lows of its inherent booms and busts, in order to increase
the stability of the economy. The Fed watches the rates of lending and
borrowing to predict bubbles, and raises interest rates to slow or stop a
bubble's growth. Once it has popped and lending/borrowing slows, the Fed
lowers interest rates to promote lending and borrowing again. I know this is
not new info for you; I just wanted to be thorough in my response.

So American Capitalism predicts bubbles and acts on those predictions. Thus,
intelligent.
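The rate response described above can be caricatured as a tiny feedback rule
(the thresholds and step size are invented; real policy is vastly more
complicated):

```python
# Caricature of the lean-against-the-wind behavior described above.
# The 7% / 2% credit-growth thresholds and 25bp step are invented.

def policy_rate(credit_growth, rate):
    if credit_growth > 0.07:   # lending looks bubbly: tighten
        return rate + 0.0025
    if credit_growth < 0.02:   # bubble popped, lending stalls: ease
        return rate - 0.0025
    return rate                # otherwise hold

print(policy_rate(0.12, rate=0.02))  # a hot credit market draws a hike
print(policy_rate(0.01, rate=0.02))  # a stalled one draws a cut
```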

You might say the Fed isn't capitalism. Which would be true. But your brain
isn't you, yet it is the component which makes you intelligent. The Fed is
part of America's capitalist system.

You might say that capitalism does not involve government. To which I would
say: pure capitalism doesn't, but the active capitalist systems we have today
do.

~~~
nevinera
>If I can show that a capitalist system makes predictions, then I will have
proven that it is intelligent.

Your example applies perfectly well to a localized ecosystem. I can, for
example, show that a surge in the population of rabbits on a small island
'predicts' a surge in the population of their main predator. That's the
island ecosystem predicting a population bubble and then acting on those
predictions. Intelligent?

That's beside the point though, because your example didn't show 'capitalism'
making a prediction; it showed _the Fed_ making a prediction. The Federal
Reserve is a functional organization, an entity with goals and intentional
behavior. The Fed is a _perfect_ example of an entity that can demonstrate
intelligent behavior.

It is indeed 'part of America's capitalist system' - again, you fall back on
the assertion that a _system_ is intelligent if the actors in it are
intelligent, the only obvious source of our disagreement. By your logic,
you'd have to consider a baseball game a functional intelligence, or a
three-legged race a maximizing entity.

Systems of intelligences _are not automatically intelligent_. It is _possible_
for them to be so; corporations qualify, governments qualify, unions qualify.
In order to have intentional behavior, an entity has to be self-coordinated;
it has to be able to _decide_ to perform an action, and then perform that
action. A market has no coordination, and has no decision-making power -
markets are inherently decentralized in that way.

If you choose to define 'intelligence' such that a thing which cannot make
decisions or choices, has no goals or intentions, and has no self-impelled
behavior can qualify, then your definition of the word is fundamentally
confusing to me, and I'm not really interested in trying to puzzle out what
you mean by it.

~~~
Sakes
Nice, you're back.

> I can for example show that a surge in the population of rabbits on a small
> island 'predicts' a surge of the population of their main predator. That's
> the island ecosystem predicting a population bubble and then acts on those
> predictions. Intelligent?

So I am assuming you are claiming to be part of the ecosystem?

> That's beside the point though, because your example didn't show
> 'capitalism' making a prediction, it showed the Fed making a prediction.

Not beside the point - it attacks a necessary part of my argument.

Anyways, you have made some good points. I'll give it some thought and get
back to you.

~~~
nevinera
>So I am assuming you are claiming to be part of the ecosystem?

In general, I am part of many systems (eco- and otherwise), but no; that
section of my reply was an argument against the idea that being able to
'predict' the future is a reasonable way to define intelligence. It doesn't
really change the argument if you stick me on the island, though - I behave
intelligently, and that affects the behavior of the ecosystem, but that
doesn't cause the ecosystem to be 'intelligent'.

It's interesting to have a debate this slowly, I feel like it's unusually
civil :-)

~~~
Sakes
Ya, I've been enjoying it :) I was worried you hadn't, based on your previous
comment. I'm sorry I haven't returned to this yet, but I will soon, I
promise.

I think we should try to find out where we have agreement and where we begin
to diverge. That should make the remainder of this discussion tighter.

~~~
nevinera
I believe our divergence is somewhere under the definition of 'intelligence'.
The word _is_ very ill-defined, to the point that we are reduced to describing
it functionally in many cases, as with the Turing test.

>Human intelligence is the intellectual capacity of humans, which is
characterized by perception, consciousness, self-awareness, and volition.

That's a very normal definition of intelligence. AI is defined (among many
other ways, naturally) as "the study of intelligent agents"; 'agency' is a
requirement for an artificial intelligence.

Since we started with the context of 'paperclip maximizers', I've been talking
about intelligence in that context - in order to be studied as an
intelligence, a thing must have 'agency', the ability to intentionally act. A
thing that doesn't have agency can still have behavior, and that behavior can
still be studied, but the behavior is _emergent_, not intentional - the
system does not have agency unless it is capable of having goals, and of
making decisions to achieve or progress toward them.

In particular, capitalism isn't 'trying to maximize capital' - that's an
effect it's (supposedly) having, but not an intentional one. It's a pretty
clear emergent effect: if many of the actors in a system are trying to
maximize their personal capital, then the system turning out to maximize net
capital should surprise nobody. It's equivalent to calling a diffusion
chamber a 'maximizer' of entropy - a diffusion chamber _does_ maximize
entropy, but by calling it a 'maximizer' (in the context of 'paperclip
maximizer') you ascribe agency to the chamber. It's not _trying_ to maximize
entropy; that's just something it happens to _do_.
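The emergent effect is easy to demonstrate with a toy simulation (entirely
mine, with made-up numbers): every agent greedily grows only its own capital,
nobody optimizes the total, yet the aggregate rises anyway.

```python
import random

random.seed(0)
agents = [100.0] * 10  # ten agents with equal starting capital (made-up units)

for _ in range(50):
    for i in range(len(agents)):
        # each agent takes whichever of two private deals pays it more;
        # no agent ever looks at the system-wide total
        deals = [random.uniform(-2, 5), random.uniform(-2, 5)]
        agents[i] += max(deals)

total = sum(agents)
print(total > 10 * 100.0)  # aggregate capital grew without being anyone's goal
```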

------
tpeo
I disagree with much of what was written, but I find it hard to address what
seems, to me, more like a long analogy than anything else.

"Capitalism" doesn't maximize "capital", in any sense. The word "capital"
generally refers either to some asset that might be used in production, such
as a piece of machinery; to structures that render services over time, like
buildings; or to some sum of money that might be used to buy either of these
through financial services.

Now, "capital" wouldn't be maximized by "capitalist" firms. Why should it be?
Firms produce additional capital goods so long as they can be sold above
production cost, but they won't produce indefinitely. They are concerned with
their profits, not with the quantity of product. They aren't even concerned
with the product's value, so long as it is above production cost. Hence
there's no incentive to inflate capital-good prices, which would be necessary
to maximize "capital" in the monetary sense.

Also, can it really be said that capitalism is an adaptive intelligent
system? That, I'm not knowledgeable enough to tell. On the other hand, human
interaction is unstructured enough that it seems unlikely there is any goal
to such a supposed intelligence, or that it has capital-maximization as its
goal. Rather, I see a collection of different goals, each corresponding to
different agents, some of them mutually contradictory. The noisy neighbor,
the ambitious monopolist, and the NIMBY all seek to maximize their private
benefit, often beyond the social benefit. Hence I find it hard to believe
that whenever markets fail - through externalities, imperfect competition, or
adverse incentives - it must nonetheless be the work of some powerful
demiurge.

Yet, insofar as it puts into words an anxiety that so many feel, I can't say
much beyond that I don't believe it. The author certainly proceeds by analogy
when he likens "capitalism" to an artificial intelligence whose goals mean
nothing to humanity at large, and mentions alienating "monuments" to capital,
as if to liken it to a 20th-century pharaoh or a cruel tyrant. But since so
many feel this is really the case, I have nothing else to say (other than
that it's a very vivid picture). I disagree, for instance, that the majority
of goods are worthless, as they must have some value in order to be sold.
Even Marx reckoned this much, and thinking otherwise means relying too much
on the stupidity of others (which should never be trivially admitted). That
would only be the case if capital were built beyond all possible utility
(say, a bridge between London and NY). And I don't think it is necessary to
make analogies to speak of wealth and income distribution, which are real
problems in themselves, and with which the majority of people are already
acquainted. Rather, I think such analogies make problems larger than they
ought to be, by giving them a monstrous character. Would it be easier to
tackle inequality by conceiving of it as the result of an incredibly powerful
artificial intelligence? I don't think so (even if the system is rotten, we'd
still be waiting on a constructive proof of one that isn't).

------
transfire
Nailed it.

------
eli_gottlieb
Oh good. The Americans are rediscovering _Das Kapital_.

Forgive the snark, but when I first heard about paperclip maximizers, I
thought, "Oh, you mean like capitalism." That these two things have a likeness
is the reemergence of an idea that had been buried in order to make capitalism
look good.

