
Algorithms Have Already Gone Rogue - steven
https://www.wired.com/story/tim-oreilly-algorithms-have-already-gone-rogue/
======
andrewla
My only objection to this is a semantic one -- the word "algorithm" is not
being well-served here. The correct word for this sort of thing is
"heuristic". The concern isn't that algorithms themselves are incorrect, the
concern is that the problem they are trying to solve is a heuristic one, not a
formal one.

Saying "let's write an algorithm to improve search results" is meaningless;
the meaningful version is "let's design and implement a heuristic that
improves search results". The algorithmic part is how to implement that
heuristic efficiently.

I can usually get through articles like this by silently replacing "algorithm"
with "heuristic"; the problem arises when some articles attempt to draw
equivalencies between "algorithmic" concepts, like running time and space, and
"heuristic" concepts, like optimizing for the wrong thing.
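The distinction can be made concrete with a toy sketch (the scoring rule and names here are invented purely for illustration): the heuristic is the rule that decides what "better search results" means, while the algorithm is the well-defined procedure that applies it.

```python
# Hypothetical sketch: heuristic vs. algorithm in a tiny search ranker.

def relevance_score(doc, query):
    # The *heuristic*: term-frequency overlap. Debatable, tunable, and
    # possibly "optimizing for the wrong thing" -- which is the point.
    return sum(doc.lower().count(word) for word in query.lower().split())

def rank(docs, query):
    # The *algorithm*: sort by score in O(n log n). Formally correct
    # regardless of whether the heuristic above is any good.
    return sorted(docs, key=lambda d: relevance_score(d, query), reverse=True)

docs = ["cats and dogs", "dogs dogs dogs", "fish"]
print(rank(docs, "dogs"))  # → ['dogs dogs dogs', 'cats and dogs', 'fish']
```

You can analyze `rank` for running time and correctness; you can only argue about whether `relevance_score` optimizes for the right thing.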

~~~
usrusr
Algorithms running on "social hardware" can be surprisingly formal. A famously
well-documented example is the early modern witch hunt. The humorous depiction
in Monty Python and the Holy Grail does a remarkably good job of conveying the
algorithmic nature.

~~~
pdfernhout
Along those lines, here are some of my comments on this general topic from an
email I posted to the Doug Engelbart Unfinished Revolution II Colloquium in
2000:
[http://www.dougengelbart.org/colloquium/forum/discussion/012...](http://www.dougengelbart.org/colloquium/forum/discussion/0126.html)

===

... I personally think machine evolution is unstoppable, and the best hope for
humanity is the noble cowardice of creating refugia and trying, like the
duckweed, to create human (and other) life faster than other forces can
destroy it. [Although in 2017 I'd add other possibilities like symbiosis or
trying to create friendlier AI as a partner (or at least AIs with a sense of
humor -- see James P. Hogan's AIs, or ones like Libby in the EarthCent
Ambassador series, or the Old Guy Cybertank series), improved
sensemaking through better intelligence-augmenting tools, and trying to help
human society be more compassionate in the hopes our path out of a singularity
will somehow reflect our path going in...]

Note, I'm not saying machine evolution won't have a human component -- in that
sense, a corporation or any bureaucracy is already a separate machine
intelligence, just not a very smart or resilient one. This sense of the
corporation comes out of Langdon Winner's book "Autonomous Technology:
Technics out of control as a theme in political thought".

You may have a tough time believing this, but Winner makes a convincing case.
He suggests that all successful organizations "reverse-adapt" their goals and
their environment to ensure their continued survival. These corporate machine
intelligences are already driving for better machine intelligences -- faster,
more efficient, cheaper, and more resilient.

People forget that corporate charters used to be routinely revoked for
behavior outside the immediate public good, and that corporations were not
considered persons until around 1886 (that decision perhaps being the first
major example of a machine using the political/social process for its own
ends).

Corporate charters are supposedly granted because society believes it is in
the best interest of _society_ for corporations to exist. But, when was the last
time people were able to pull the "charter" plug on a corporation not acting
in the public interest? It's hard, and it will get harder when corporations
don't need people to run themselves.

I'm not saying the people in corporations are evil -- just that they often
have very limited choices of actions. If a corporate CEO does not deliver
short-term profits, they are removed, no matter what they were trying to do.
Obviously there are exceptions for a while -- William C. Norris of Control
Data was one of them, but in general, the exception proves the rule.
Fortunately though, even in the worst machines (like in WWII Germany) there
were individuals who did what they could to make them more humane
("Schindler's List" being an example).

Look at how much William C. Norris of Control Data got ridiculed in the 1970s
for suggesting the then radical notion that "business exists to meet society's
unmet needs". Yet his pioneering efforts in education, employee assistance
plans, on-site daycare, urban renewal, and socially-responsible investing are
in part what made Minneapolis/St.Paul the great area it is today. Such efforts
are now being duplicated to an extent by other companies. Even the company
that squashed CDC in the mid 1980s (IBM) has adopted some of those policies
and directions. So corporations can adapt when they feel the need.

Obviously, corporations are not all powerful. The world still has some
individuals who have wealth to equal major corporations. There are several
governments that are as powerful or more so than major corporations.
Individuals in corporations can make persuasive pitches about their future
directions, and individuals with controlling shares may be able to influence
what a corporation does (as far as the market allows).

In the long run, many corporations are trying to coexist with people to the
extent they need to. But it is not clear what corporations (especially large
ones) will do as we approach this singularity -- where AIs and robots are
cheaper to employ than people. Today's corporation, like any intelligent
machine, is more than the sum of its parts (equipment, goodwill, IP, cash,
credit, and people). Its "plug" is not easy to pull, and it can't be easily
controlled against its short term interests.

What sort of laws and rules will be needed then? If the threat of corporate
charter revocation is still possible by governments and collaborations of
individuals, in what new directions will corporations have to be prodded? What
should a "smart" corporation do if it sees this coming? (Hopefully adapt to be
nicer more quickly. :-) What can individuals and governments do to ensure
corporations "help meet society's unmet needs"?

Evolution can be made to work in positive ways, by selective breeding, the
same way we got so many breeds of dogs and cats. How can we intentionally
breed "nice" corporations that are symbiotic with the humans that inhabit
them? To what extent is this happening already as talented individuals leave
various dysfunctional, misguided, or rogue corporations (or act as "whistle
blowers")? I'm not saying here that the individual directs the corporation
against its short-term interest. I'm saying that individuals affect the
selective survival
rates of corporations with various goals (and thus corporate evolution) by
where they choose to work, what they do there, and how they interact with
groups that monitor corporations. To that extent, individuals have some
limited control over corporations even when they are not shareholders.
Someday, thousands of years from now, corporations may finally have been bred
to take the long term view and play an "infinite game".

However, if preparations fail, and if we otherwise cannot preserve our
humanity as is (physicality and all), we must at least adapt with grace
whatever of our best values we can preserve or somehow embody in future
systems. So, an OHS/DKR [Open Hyperdocument System / Dynamic Knowledge
Repository] to that end (determining our best values, and strategies to
preserve them) would be of value as well.

When aluminum was first discovered around 1827, and for decades afterward, it
was worth more than platinum, and now just under two centuries later we throw
it away. In perhaps only two decades from now, children may play "marbles"
using diamonds, and a child won't bother to pick a diamond up from the street
unless it is exceptionally pretty (although you or I probably would out of
habit -- "see a diamond, pick it up, and all the day you have good luck").

This long essay is my own current perspective on this developing situation,
and part of the process of my formulating my own thinking on these trends and
how I as an individual will respond to them.

To conclude, I think all the "classical" problems like food, energy, water,
education, and materials will be technically solvable by 2050 even if we don't
do much specifically about them (and, like hunger, are solved today except for
politics). The dynamics of technology and economics are just taking us there
whether we like it or not. Those goods may all essentially be "free" or
"extremely cheap" by 2050. Obviously the complex politics of these issues need
to be resolved, and the solutions need to be actually implemented. If they are
"extremely cheap", people still need a tiny amount of income to buy them.

Still, I think Doug [Engelbart] is right. We face huge problems that only
collaborative efforts can solve -- especially the problems of intelligent
machines, technology-amplified conflict, and a complete disruption of our
scarcity-based materialistic economic and social systems. These problems dwarf
technical issues like energy, food, goods, education, and water.

The problem has always been, and will always be, "survival with style" (to
amplify Jerry Pournelle). The next twenty years will fundamentally change what
the survival issues are: environment, threats, and allies. They will also very
well change what we value as "style" -- when diamonds are cheap as glass
[perhaps from nanotechnology], what will one give to impress?

===

~~~
crusty
Just sayin... point me to the person who has the habit of seeing diamonds on
the ground and picking them up (and doesn't work in a strip mine). Habits
aren't something we want - they're something we do.

------
mathgenius
The financial system is a giant message passing algorithm. It is pretty much
just a min-sum algorithm [1] whose sole purpose is to answer the question
"what should we do?". Anyone who has played around with these algorithms for
solving decoding problems knows that they are fabulously powerful.

But these message passing algorithms have two weaknesses:

(A) When there is more than one solution

(B) When there are small loops in the message network

By far the worst problem is (B): it's a kind of "corruption" of the network
and causes it to pretty much go off the rails. I think people already do
understand the consequences of these problems in the financial system, but we
don't seem to see how we can just change the topology and/or the messages
themselves, in particular, to try to fix up these self-reinforcing loops. Or:
move away from min-sum towards sum-product (which often works an order of
magnitude better) by perhaps implementing basic income. Etc. Etc.

[1]
[https://en.wikipedia.org/wiki/Generalized_distributive_law](https://en.wikipedia.org/wiki/Generalized_distributive_law)
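For readers unfamiliar with min-sum, here is a minimal sketch on a chain-structured cost model (the costs and topology are invented toy values, not a model of any real financial network). Messages sweep left to right, each summarizing the best achievable total cost so far:

```python
# Min-sum message passing on a chain of binary decisions (toy example).
# unary[i][v] is the cost of node i taking value v; `pairwise` is the
# extra cost incurred whenever two adjacent nodes disagree.

unary = [[0.0, 1.0], [2.0, 0.0], [0.5, 0.5]]
pairwise = 1.5

def min_sum_chain(unary, pairwise):
    # msg[v] = best cost of all nodes to the left, given the current
    # node takes value v
    msg = [0.0, 0.0]
    for i in range(1, len(unary)):
        prev = [msg[u] + unary[i - 1][u] for u in (0, 1)]
        msg = [min(prev[u] + (0.0 if u == v else pairwise) for u in (0, 1))
               for v in (0, 1)]
    return min(msg[v] + unary[-1][v] for v in (0, 1))

print(min_sum_chain(unary, pairwise))  # → 1.5 (assignment 1, 1, 1)
```

On a chain or tree this is exact; the small loops mentioned in (B) are precisely what breaks the guarantee, because a message can circle back and reinforce itself.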

~~~
platz
> The financial system is a giant message passing algorithm. It is pretty much
> just a min-sum algorithm [1] whose sole purpose is to answer the question
> "what should we do?".

no, no no. To even propose that the answer to the question of "what should we
do?" can be solved by the financial system is laughable; that is pure free-
market absolutism.

The answer to the question "what should we do?" is _political_.

Let us not confuse the market with politics.

~~~
platz
Downvoters, explain your logic please

~~~
danso
I assume you're being downvoted because you seem to have misread the author's
statement and created a strawman argument. He posited that the financial
system's algorithm's purpose is to answer, "what should we do?". You seem to
have interpreted that as saying that the financial system _should_ answer that
problem for society as a whole.

~~~
platz
> He posited that the financial system's algorithm's purpose is to answer,
> "what should we do?".

Yes, that is just restating what the author wrote, without providing any
clarifying information.

> You seem to have interpreted that as saying that the financial system should
> answer that problem for society as a whole.

Maybe yes, that is a maximalist interpretation; but what is the alternative
interpretation? It doesn't seem to be present in the author's text.

> move away from min-sum towards sum-product (which often works an order of
> magnitude better) by perhaps implementing basic income. Etc. Etc.

Referencing 'basic income' here seems like a whole-society problem to me,
which has nothing to do with the technical implementation of financial payment
gateways, no?

------
jordigh
Is this his new meme-hustling? "Algorithm"?

[https://thebaffler.com/salvos/the-meme-hustler](https://thebaffler.com/salvos/the-meme-hustler)

I've never liked how Tim O'Reilly frames his discussion around vague terms
like "open" which could mean participatory, transparent, available, or any
other number of vague, feel-good terms. Now he seems to be calling "algorithm"
things like economic models and government policy.

These are widely disparate things, but by using vague terms in different
contexts, he pushes discussion towards the direction he wants to steer it: in
the case of "open", away from free software. In the case of "Web 2.0", towards
anything that involved crowd participation.

With "algorithms", he seems to want to push the notion that technology is both
scary and liberating, and that we need tech messiahs like Bezos or Musk to
bring it under control.

~~~
tenaciousDaniel
That's been bothering me: the word "algorithm" is slowly becoming known as
this ambiguously scary thing.

~~~
mfoy_
An "algorithm" is simply a way to do a thing based on a set of rules to be
followed... of _course_ it's ambiguous.

Getting scared of anything that generic is silly. Might as well fear the
outside because anything can happen out there!

~~~
AlexandrB
Throughout most of history rules and laws were enforced by a combination of
"the letter of the law" and "the spirit of the law". The latter being
shorthand for the role of human discretion.

Algorithms completely obliterate the latter - increasingly turning previously
flexible systems into the equivalent of zero-tolerance-policy schools, where
human discretion has no role to play. This is a problem because many laws on
the books were written with the assumption that "the spirit of the law" would
be a guiding principle when "the letter of the law" is unclear.

As a trivial example of this going terribly wrong, consider Youtube's
copyright enforcement algorithms. Copyright was clearly designed with many
loopholes for fair use to allow culture to move forward. Youtube's algorithms
ignore all of this, changing the effective meaning of copyright on the site
from one where the rights of the copyright owner are balanced with the rights
of critics, commenters, and other creators to one where the rights of
copyright owners are the only ones that matter.

Now imagine this kind of algorithmic enforcement applied to traffic laws, HR
rules, or insurance policies and you can see why people might be nervous about
"algorithms". Algorithms neither think nor feel and have no empathy. It's the
ultimate actualization of the dystopia in the movie Brazil where the world is
a cold, unfeeling, bureaucratic nightmare. Except that where human bureaucrats
at least need to sleep sometimes, computerized ones never rest.

~~~
gmfawcett
I write a faulty policy that harms people. I encode it as an algorithm,
implement it as a program, and deploy it on a wide scale. Now it's automated
and distributed, and it is harming people. Where is the fault --- in the
program? in the algorithm? Or in the policy? Where do we fix the problem?

IMO, we are quick to blame inanimate constructs, when people and their
policies are the source of fault. Vilifying "algorithms" only serves to
distract from root causes.

~~~
AlexandrB
The argument I'm making is that when it comes to "human systems" like
communities it's not possible to write a complete, consistent, and fair policy
that can be unambiguously interpreted (i.e. by a computer). This is why Hacker
News still has moderators and is not strictly governed by algorithms.

"Fairness" has always been heavily contextual, and the idea that it can be
distilled to a matter of "if A and B then C" is folly. Even pure math can't
reach the combination of completeness and consistency you assume is possible:
[https://en.m.wikipedia.org/wiki/Gödel%27s_incompleteness_the...](https://en.m.wikipedia.org/wiki/Gödel%27s_incompleteness_theorems)

~~~
__s
Human judgement can't escape Gödel

------
dcre
Can anyone explain why O'Reilly thinks nobody knew until recently that
companies are biased toward part-time work in order to avoid providing
benefits?

"We can’t see, for example, that the algorithms that manage the workers at
McDonald’s or The Gap are optimized toward not giving people full-time work so
they don’t have to pay benefits. All that was invisible. It wasn’t until we
really started seeing the tech-infused algorithms that people started being
critical."

And here's one that's more subtle, so I don't blame him quite as much, but he
is naive to think "ideas" are what cause corporations to act the way they do.
Material and institutional conditions cause their behavior, which is then
justified after the fact by appeal to shareholder value.

"Somebody planted the idea that shareholder value was the right algorithm, the
right thing to be optimizing for. But this wasn’t the way companies acted
before. We can plant a different idea. That’s what this political process is
about."

~~~
pdimitar
> _Can anyone explain why O'Reilly thinks nobody knew until recently that
> companies are biased toward part-time work in order to avoid providing
> benefits?_

How do you know he's thinking that? The way he was talking, I read him as
"well it's obvious that nobody has stopped this behavior so it's fair to
assume that not enough people noticed". Wouldn't you agree with that?

> _And here's one that's more subtle, so I don't blame him quite as much, but
> he is naive to think "ideas" are what cause corporations to act the way they
> do. Material and institutional conditions cause their behavior, which is
> then justified after the fact by appeal to shareholder value._

That was the only part of the interview where I strongly disagreed with him --
and you're right. It's not about ideas; there are a lot of people out there
who are extremely good at bean-counting and micro-management, and of course
awful at promoting a positive work environment. They will never change. Only
regulators can limit them a bit, if even that.

~~~
JustSomeNobody
> How do you know he's thinking that? The way he was talking, I read him as
> "well it's obvious that nobody has stopped this behavior so it's fair to
> assume that not enough people noticed". Wouldn't you agree with that?

I wouldn't. Anyone... everyone... who's worked retail _knows_ this. Anyone
who's worked retail management knows this because when you ask to hire people
you're told to hire part timers. Two part timers is always better than a full
timer, you're told. This is NOT invisible to anyone. It's simply unspoken.

~~~
sukilot
> you're told

> you're told

> It's simply unspoken.

~~~
JustSomeNobody
Not the same thing.

You're told to hire part time. You're not told it's because the company
doesn't want to pay them benefits. It's implied, but you're not told.

------
jpster
>We had plenty of bias before but we couldn’t see it. We can’t see, for
example, that the algorithms that manage the workers at McDonald’s or The Gap
are optimized toward not giving people full-time work so they don’t have to
pay benefits. All that was invisible. It wasn’t until we really started seeing
the tech-infused algorithms that people started being critical.

I’m not sure how this can be said with a straight face. An algorithm was
really not needed to perceive this and it’s insulting and strange to suggest
it.

~~~
vpribish
“Tech-infused”. Like tech is an herb or a spice? Surely a sign that there are
no coherent ideas to be found from that writer.

------
zghst
"why capitalism is like a rogue AI"

These people seriously need to take a step back. At some point you cross the
bridge between reporting and actively advertising someone's personal views.

I can't ever trust or read people who constantly try to push an agenda; it's
disingenuous. Even more so when, while reporting, you engage in a personal
exchange without discussing the data and statistics, i.e. the facts -- you are
just allowing yourself to become someone else's personal blog.

Reporters are supposed to fact check, look for concrete evidence in someone's
statements, hold people accountable to their words, and yet certain people get
a pass all the time, even the star treatment.

~~~
saulrh
Yes? This is an interview with an author that was conducted specifically to
talk about the book and the material in it. What did you _expect_?

Unrelated, I'd like to see your argument against that particular line. IMO the
comparison is an excellent one; its only issue is that the chosen scope ("the
financial market") is too small. Corporations and other bureaucratic entities
like governments are powerful cross-domain optimizers with utterly alien
cognitive processes and goals. Intelligence, certainly; artificial, we might as
well call it that.

~~~
edanm
Calling financial markets AI is wrong in pretty much the only way a word can
be "wrong": It's not what most people mean when they talk about AI.

This makes it really easy to make statements that sound deep and meaningful,
but really aren't. E.g. "I'm not worried about Artificial Intelligence - we
already have artificial intelligence, it's called a company. Companies are
artificial, and they behave intelligently".

This just isn't what people are worried about. What people are worried about
is:

1. Soon we will be able to create software/robots that replace tons of human
jobs. This has _nothing to do_ with "companies as an AI".

2. A super-intelligence will be created that is vastly smarter than any
human, and can make itself even smarter, but will have different goals than
humanity. Again, this is only very thinly related to the "companies as AI"
spiel (companies are not superintelligent, they don't _actually_ have coherent
goals of their own).

~~~
notthemessiah
It's not that companies are "operating intelligently"; it's that they aren't:
they're operating on a principle that maximizes profits (and ROI for
shareholders) at the expense of everything else, that's the central guiding
principle, and nobody at a publicly traded firm can oppose it successfully,
without being voted off the board by shareholders. It's effectively an
algorithm that delegates tasks to human operators, and automation is slowly
replacing the human component.

------
egypturnash
"I’m not sure that Jeff would make a great president, but he might."

So let me tell you about my experience with Amazon Fulfilment. I was gonna pay
Amazon, who has this huge expertise in packing and shipping stuff, to fulfil
my Kickstarter. I'd made a large, delicate art book. Amazon, in their infinite
wisdom, stuffed them in bubble wrap envelopes and dropped them in the mail.
They were getting bent, the envelopes were getting ripped up, it was a mess.

I spent a month in customer service hell e-mailing someone who was following a
script that said I would have to turn on something they called "prep", which
would ask them to look at it and package it better. Three times I checked that
this was set, and three times I sent myself a test book that came in a bubble
mailer.

Finally I got clued in that there is a high-level support team that you access
by sending a complaint directly to Bezos. This person, after some back and
forth, ultimately informed me that due to their internal systems, it is
_completely impossible_ for them to put a book in anything _but_ this level of
shitty packaging; "prep" is just not a process that can ever _happen_ to a
book.

They covered shipping my books out to someone actually capable of finding
sensible boxes and shipping them. And they intimated that someone maybe lost a
job over this. But changing this, they said, might take on the order of years.

I'm not sure I want a man who presides over a system like this running the
country.

Especially given that O'Reilly points out that the "rogue algorithms" of the
title are corporations, and the only reason Amazon is headquartered in WA is
that the tax was lowest there...

------
jacknews
"Our fears ultimately should be of ourselves and other people."

Indeed. The short-term fear at least should not be about machine overlords,
but about how people in power use AI to increase their power and/or make life
worse for everyone else.

------
peterwwillis
Have any of these tech "visionaries" (aka millionaires that we pretend have
more insight than non-millionaires) considered that at the same time that
we've increased our wealth we've also done more damage to our environment? If
the cost of increased wealth is decreased habitat, will peak wealth result in
the death of nature?

Aside: people are still comparing AI to a machine that feels, and this is so
stupid it boggles the mind. The machine that creates paper clips will not kill
off anyone who tries to turn it off because it does not have self-
preservation, which is a system dependent on fear of death. Machines do not
fear, and even if they did they would not fear death, because they have solid
state. Bio systems are walking RAM disks.

Algorithms are basically math problems. The financial system and government do
not work based on math problems, they work based on emotional instability.
Seriously. Both these systems are driven almost entirely by the feelings of
humans. They aren't algorithms, they are shitty biological systems that don't
make mathematical sense at all.

~~~
victorNicollet
If a good algorithm optimizes for "quantity of paperclips produced", then it
would recognize that "prevent humans from turning me off right now" is an
optimal strategy. No fear of death involved, only pure rational optimization.

~~~
peterwwillis
The thought experiment is not realistic, because no intelligent system would
think it could produce paperclips infinitely, and it would already have an
"off feature" which was designed for a purpose (maintenance, update,
replacement), and was intended to be used. An intelligent system would not
reject proper use.

We already replace humans at their jobs all the time and they only very rarely
kill their masters to prevent it.

~~~
ric2b
> because no intelligent system would think it could produce paperclips
> infinitely

This is irrelevant: being turned off while it's still possible to produce more
paperclips would not be the optimal strategy for the machine trying to
maximize paperclip production.

> and it would already have an "off feature" which was designed for a purpose

Look up what the control problem is. We don't know how to design such a
feature for a General AI that also lets it do its job effectively. It's not
as simple as it looks. If you're not talking about a General AI then sure,
it's easy, but non-General AIs are not very scary.

> We already replace humans at their jobs all the time and they only very
> rarely kill their masters to prevent it.

But humans don't have "maximize my production for this company" as their main
life objective. Things like food, not going to jail, not dying, etc usually
come first.

------
dahart
> financial markets are the first rogue AI.

I feel like I just got Rick-Rolled.

------
Iv
Sorry, but it is hard to take seriously someone who thinks AI and automation
won't take away jobs.

They will and it is a good thing.

~~~
fiblye
Even assuming we end up with some society where robots do most of the labor
and universal basic income becomes a thing, what do you think 90% of people
will do? People need to be occupied, and the typical person isn't an inventor
or artist. Furthermore, good luck making kids want to go to school when they
see it as pointless since they can just get everything free from the robots
while they watch TV. If you think obesity and our sedentary lifestyles are a
danger now, it's only just starting.

We're in for some huge problems as automation ramps up. The people saying
"this is good" now will be screaming to go back once the problems start
rolling in.

~~~
marcosdumay
Well, I think you give people too little credit. Nobody can stand passively
watching TV the entire day every day of their lives.

But even if you aren't, why would it bother you? If other people want to waste
their lives on a couch in front of the TV, what makes this wrong?

~~~
sharemywin
Assume that AI/computers come along and do better the one thing you do best
for a job. How do you recover from that? Especially if said AI owners decide
you only get a barely livable stipend while they keep most of the profits.

~~~
marcosdumay
Is doing something worthless (for you) just because a computer can do it
better? If so, you may need to look for a better hobby... because after
computers start doing everything better than us, all we do will be hobbies,
not work.

------
hasbroslasher
Obviously the most controversial part of this is the "capitalism is the first
rogue AI" point. I will try to stay out of having an opinion on this; I just
want to add some colour to what he's saying.

First, in the classical "market" scenario, we're talking about little atomic
firms, each with goals and hopes and dreams of profit. Each of these firms has
some kind of "knowledge" and some kind of decision making apparatus. They all
have some functionality. In a sense, these firms are like people - so much
that our government treats them as such in many cases!

In many cases, a lot of these ideas and processes end up being the same, and
when they do, we call it collective wisdom [0] or collective intelligence [1].
I won't go too deep into that.

So while financial markets and the firms that comprise them aren't exactly
_machines_, they do display a form of intelligence different and sometimes
more effective than our individual knowledge.

[0]
[https://en.wikipedia.org/wiki/collective_wisdom](https://en.wikipedia.org/wiki/collective_wisdom)
[1]
[https://en.wikipedia.org/wiki/collective_intelligence](https://en.wikipedia.org/wiki/collective_intelligence)

------
freedomben
> when the curtain rolls back we see that those superpowers have consequences:
> Those algorithms have bias built in.

> That’s absolutely right. But I’m optimistic because we’re having a
> conversation about biased algorithms. We had plenty of bias before but we
> couldn’t see it.

I must say I am really happy to see that bias in tech is being recognized and
accounted for (for the most part).

Please forgive the politics (I'll try really hard not to bash Trump ;-) ), but
if there is a silver lining to the 2016 US presidential election I think it is
that it has really caused many of us to introspect and realize how thick our
echo chamber walls have really gotten over the last few years. The chamber was
constructed so quickly, I barely realized it was happening. We're becoming so
polarized that we're actually moving to different communities to be with more
people "like us." Simple awareness of the problem is a huge step forward in
being able to resolve it.

------
grandalf
It's 2017 and humans have built several anemones, and we have several highly
profitable firms that are built around these anemones.

Their employees act as the stewards (clownfish) more as beings in symbiosis
than as one controlling the other. We can neither understand nor pull the plug
on these creatures. Time will tell which species evolves more rapidly.

------
mathgenius
> financial markets are the first rogue AI

How about this idea: the first rogue AI was _language_. In terms of AI as a
compositional system for storing meaning, I think this might be a reasonable
position to take. Yes people, we've been playing this game for a very long
time...

~~~
projektir
That sounds a lot like McLuhan's ideas.

~~~
mathgenius
Who is McLuhan? Do you have a reference for this?

~~~
grzm
[https://en.wikipedia.org/wiki/Marshall_McLuhan](https://en.wikipedia.org/wiki/Marshall_McLuhan)

------
sukilot
[https://www.amazon.com/Weapons-Math-Destruction-Increases-In...](https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815)

------
bitL
Seems like the only job available in a not-so-distant future will be AI-
correcting engineer...

------
vpribish
I've flipped the bozo bit on the term 'Algorithm'.

------
quotemstr
Capitalism is the best thing that's ever happened to humanity. The author's
ideological bias gets in the way of understanding.

~~~
cujic9
It has certainly enabled more humans to exist. It has created way better
conditions for many of those humans. And it has created unimaginable suffering
for just as many humans.

I wouldn't get rid of capitalism, but I certainly wouldn't give it a
superlative. It's kind of like saying that domestication is the best thing
that has happened to chickens, cause look how many more of them there are, and
wow some of them live on a free range.

------
jefurii
> For more than two decades, Tim O’Reilly has been the conscience of the tech
> industry. ...he was among the first to perceive both the societal and
> commercial value of the internet [and] ... he drew upon his education in the
> classics to apply a moral yardstick to what was happening in tech.

Writer hasn't heard of Richard Stallman?

Update: writer is Steven Levy, who most definitely _has_ heard of Stallman,
and should know better.

