
The World Will Be Our Skinner Box - DyslexicAtheist
https://thefrailestthing.com/2018/11/19/the-world-is-our-skinner-box/
======
skymuse
I read some of the Google patent mentioned in the text and found it
interesting. They basically want a connected network of devices that
communicate with each other and with Google. Most of the devices follow the
same loop: watch -> think -> act -> report. What is interesting is the
'report' phase of the algorithm.
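
A minimal sketch of that watch -> think -> act -> report loop, framed as a toy thermostat. Everything here (the thermostat framing, the function names, the `uplink` list standing in for the vendor connection) is illustrative, not taken from the patent text:

```python
def watch(sensor):
    """Observe the environment (here: a stand-in temperature sensor)."""
    return sensor()

def think(observation, target=20.0):
    """Decide locally what to do with the observation."""
    return "heat" if observation < target else "idle"

def act(decision):
    """Carry out the decision on the device itself."""
    return {"action": decision}

def report(event, uplink):
    """The phase singled out above: the event leaves the device."""
    uplink.append(event)

def device_cycle(sensor, uplink):
    observation = watch(sensor)
    decision = think(observation)
    event = act(decision)
    event["observation"] = observation
    report(event, uplink)

uplink = []  # stands in for the connection back to the vendor
device_cycle(lambda: 18.5, uplink)
```

The first three phases would work fine offline; only `report` requires the network, which is why that phase is the interesting one.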

I think there is a battle between two competing AI philosophies. One
philosophy thinks AI should be central: all devices need to talk to the
overmind for learning. The other philosophy thinks AI should be local: the
device should learn on its own, kind of like a biological agent. I think the
'biological agent' form of AI is interesting when it comes to privacy of
information. Only you and the device have the data.

The future of AI should be interesting with these two philosophies in mind.

~~~
zapita
There is a possible middle-ground between central and local AI: federated
learning. Its goal is to provide the best of both worlds. Here's a recent
paper: [https://ai.googleblog.com/2017/04/federated-learning-collabo...](https://ai.googleblog.com/2017/04/federated-learning-collaborative.html)
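
A rough sketch of the federated-averaging idea from that post: clients train on their own private data, only the resulting weights (never the data) travel to the server, and the server averages them weighted by each client's data size. This is a toy linear-regression version for illustration, not Google's actual implementation:

```python
import numpy as np

def local_update(w, X, y, lr=0.05, epochs=5):
    """One client's gradient steps on its private data (linear model, MSE).
    Only the resulting weights and a sample count ever leave the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w, len(y)

def federated_average(results):
    """Server step: average client weights, weighted by data size."""
    total = sum(n for _, n in results)
    return sum(w * (n / total) for w, n in results)

# Two clients whose private data both follow y = 3x
clients = [
    (np.array([[1.0], [2.0]]), np.array([3.0, 6.0])),
    (np.array([[3.0], [4.0]]), np.array([9.0, 12.0])),
]

w_global = np.zeros(1)
for _ in range(10):  # communication rounds
    results = [local_update(w_global, X, y) for X, y in clients]
    w_global = federated_average(results)
# w_global converges toward the shared slope of 3
```

The "middle ground" is visible in the data flow: learning is central (one shared model improves) while the raw observations stay local.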

------
narrator
Kind of reminds me of how they say the highest level of automation for self-
driving cars will be no steering wheel. Who decided that was higher on the
scale, and a good thing? The "progress" defined here is complete control over
all actions of individuals in society by an AI god.

These scales by which progress toward the future is measured are a key
ingredient of social engineering. One task that Free Software Foundation type
organizations could take upon themselves is to define a road map for future
progress of technology that recognizes freedom and human dignity.

~~~
mikestew
_The "progress" defined here is complete control over all actions of
individuals in society by an AI god._

Umm, no, the "progress" is machines that I don't have to micro-manage anymore.
Imagine an oven in your house that you had to constantly adjust to keep at the
desired temperature. No, that would be stupid; we solved that problem decades
ago. Now imagine a car that you had to control by hand, with the constant
micro-adjustments needed to keep it between the lines. No, that would
be...well, it _will_ be stupid, and a solved problem, one of these days.

"Hey, oven, make your interior 425F so I can bake a pizza."

"Hey, car, drive me to Dusty Strings in Fremont. Oh, why am I talking to you?
You see it on my calendar, and you've been warmed up and ready waiting for
me."

I don't know what dystopian future you're imagining, but I don't see the AI
gods whisking me off to places I don't want to go. I see machines that
finally, FINALLY, do what I want without me having to speak baby talk to them.

~~~
ozmbie
"Hey, oven, make your interior 425F so I can bake a pizza"

Oven: "Sorry Mike. I can only go to 400F unless you upgrade your ovenware plan
to $2.99 a month. Or use an ovenware family brand pizza! For every 3 ovenware
brand pizzas you cook, you get a chance to win a free pie! Oh, by the way, on
your trip to Dusty Strings tomorrow, stop by Chad's Bakery for their famous
pies."

~~~
avmich
So logically after that comes a lobotomy of the oven: destruction of the
electronics and putting the basic functions back under direct control?

------
mauld
This is scary, but it is important we recognize that we can be controlled and
have our behaviour modified by altering our environment - and that this can be
a good thing.

Who hasn't deleted Steam from their computer for a few weeks, thrown out junk
food, or set up a good workspace in order to promote a behaviour we want to
cultivate, to be more like the person we want to be?

The important thing is that in those situations our environment remained under
our conscious control. We need to ensure our personal and digital environments
likewise remain under our control and that their goals are transparent - and I
am not sure companies like Google do, or always will, agree.

~~~
core-questions
The key thing to keep in mind here is incentives and motivation.

You have a motivation toward self improvement, you make a change to your
environment, monitor it, and see what happens. You want to increase your
physical fitness, awareness, education, life position, finances, etc. and you
make corresponding changes where you and yours are the primary beneficiaries.

Google has a motivation to deliver value to their shareholders. They want to
deliver more, so they take advantage of their privileged position in our lives
and change our environment for us without asking, monitor it, and see what
happens. They have far more capability for analysis and, considering the
scale, far more financial motivation to make tiny adjustments that we would
pass over in our own optimizations.

Is it necessarily evil or bad for us? No, it doesn't have to be, but I have
very little faith in Google to make any choices that even border on altruistic
when it's far more profitable to exploit us. Let's remember that fundamentally
they are an advertising company, and that drives most of the money in the
business; advertising isn't just ads as we know them, but fundamentally the
business of changing your mind, suggesting things, and influencing your
behaviour to make you exchange some money for some service or good.

When you have a privileged position of influence, and you're willing to sell
it to literally anyone who signs up, there is clearly a lot of potential for
abuse.

~~~
AndrewKemendo
_No, it doesn 't have to be, but I have very little faith in Google to make
any choices that even border on altruistic when it's far more profitable to
exploit us._

I think there's a case to be made here that Google's motivations can mirror
those of the users and that it's in Google's best long term interest for that
to be the case.

It seems obvious to me that a company that can be aligned with their
customer's desires will have the most longevity, which may mean forgoing
temporary, short term revenue gains.

I'm not so naive as to think that this is currently the state of things, or
that it's simple for this to be the case - quite the opposite. Rather, my
point is that there is no law of economics that guarantees users and
corporations must be in conflict, or that corporations will always do better
financially by exploiting users to the users' detriment (or that a competitor
that does so will beat one that doesn't).

~~~
TeMPOraL
> _Rather, my point is that there is no law of economics that guarantees users
> and corporations must be in conflict or that corporations will always do
> better financially by exploiting users to the users ' detriment (or that a
> competitor that does that will beat one that doesn't)._

But isn't there?

I think the core point of Meditations on Moloch[0] was that there kind of _is_
, that in the limit, competitive environments will always sacrifice every
value other than the one over which they're competing, and that an "AI god"
would probably be the only thing that's strong enough to break these chains.

--

[0] - [http://slatestarcodex.com/2014/07/30/meditations-on-moloch/](http://slatestarcodex.com/2014/07/30/meditations-on-moloch/)

~~~
AndrewKemendo
I mean, nothing written here isn't already covered more thoroughly in
undergraduate economics curricula.

However, none of that conflicts with my point that there is nothing preventing
a firm and the sum of its users from having strictly aligned goals in the
context of their relationship.

Competition simply means that other firms may be able to align their goals
with users' goals more tightly and run the first firm out of business. There
is much more to be probed here, like consistency, short- and long-term goals,
and desire/goal uncertainty on the part of both user and firm. None of those
seem to be intractable issues.

------
elvinyung
Foucault, of course, talks about this. In _Discipline and Punish_ :

>But the Panopticon was also a laboratory; it could be used as a machine to
carry out experiments, to alter behaviour, to train or correct individuals. To
experiment with medicines and monitor their effects. To try out different
punishments on prisoners, according to their crimes and character, and to seek
the most effective ones. To teach different techniques simultaneously to the
workers, to decide which is the best. To try out pedagogical experiments – and
in particular to take up once again the well-debated problem of secluded
education, by using orphans ...

It's interesting to note that the argument made in this post is exactly what
Deleuze said two decades after Foucault [1], that the mechanisms of
"discipline" are moving outside of enclosed spaces because of the ever-longer
trails of data that we leave everywhere.

[1] [https://www.jstor.org/stable/778828](https://www.jstor.org/stable/778828)

------
api
The Borg may turn out to be prophetic. We could turn into a hive mind of
sorts.

I've already been calling Internet fads, echo chambers, and meme outbreaks
"Borgsongs." I seem to recall this term from STTNG for the omnipresent hive
signal experienced by Borg.

~~~
21
Imagine a neural network that could design memes/songs by directly measuring a
bunch of test subjects' brains. Kind of like a focus group.

A short story even asked the question: could you engineer a meme/song so
powerful that it would literally drive you insane? We know this stuff exists
(brainwashing, cults, ...), but can you super-concentrate it?

~~~
api
That stuff is kind of like adversarial examples for human brains, and it
probably is possible to study the brain deeply enough to create super-
indoctrination techniques.

At some point we are going to be forced to start regulating persuasion.

------
skybrian
If you hired a human assistant (or had an assistant at work), they would know
when you're working and generally what you're up to. It would make sense that
a digital assistant would need to know these things, too. It's inherently a
high-trust relationship.

Of course, the risks are different when you're trusting a device from a large
corporation to take on a role that a human would do.

I wouldn't buy one, but some people will decide it's worth it, and I don't
think they are losing their freedom by giving orders to a machine.

~~~
pacala
If I catch said human assistant reporting my daily activities to a third
party, they will be terminated with extreme prejudice.

~~~
skybrian
It's not a third party. Google/Amazon is providing the service.

~~~
nafey
What if you hire a maid for a day from a service, and you find out that they
have been keeping tabs on your dietary habits and daily routine and reporting
them back to their employer? That's closer to what Google is doing.

~~~
skybrian
The speaker is just the user interface. You've effectively hired a large
corporation to work in your house. Alexa doesn't work for Amazon, Alexa is
Amazon.

Google at least makes this explicit with "Ok, Google".

~~~
pacala
Exactly. You've hired a monstrously large entity to snoop on all your
activities. This is in no way equivalent to hiring a human person, who has an
information and economic footprint orders of magnitude smaller. An analogy
that keeps things in proportion:

'You've effectively hired KGB/CIA to work in your house, the human assistant
is just the interface'.

~~~
skybrian
It's more like putting your money in a bank. Yes, anything you use the bank
for, the bank knows; you don't expect a bank teller to keep anything secret
from the rest of the bank. And yes, the government can find out your
transactions with a warrant. Still, people do keep their money in banks.

Banks are more regulated than Internet companies, though, so maybe there is
something to be done there.

------
ryanwaggoner
Something just occurred to me: it seems like a lot of the outrage over ads and
privacy is just intellectual elitism:

"Sure, _I_ am too aware of these techniques and mentally strong to ever be
influenced by marketing, but what about the rest of humanity? Google will
control their minds like a virus and soon destroy the world!"

------
mpr6
the tools of our tools

------
CharlesDodgson
despite all my rage I'm still just a rat in a cage.

~~~
RobertoG
You could do what all the clever rats do at night: plot how to conquer the
world.

------
jpm_sd
All Watched Over by Machines of Loving Grace

I like to think (and

the sooner the better!)

of a cybernetic meadow

where mammals and computers

live together in mutually

programming harmony

like pure water

touching clear sky.

I like to think

(right now, please!)

of a cybernetic forest

filled with pines and electronics

where deer stroll peacefully

past computers

as if they were flowers

with spinning blossoms.

I like to think

(it has to be!)

of a cybernetic ecology

where we are free of our labors

and joined back to nature,

returned to our mammal

brothers and sisters,

and all watched over

by machines of loving grace.

by Richard Brautigan (1967)

------
dranoel0226
I have nothing to hide and why wouldn't I want ads that better reflect my
preferences?

~~~
inherentFloyd
Why do you want ads at all?

~~~
21
Because the alternative is paying.

Do you want to pay for all the hundred sites you visit in a year?

~~~
dkersten
No, I would more carefully pick and choose which sites to visit, since I'd
have to pay for them. I can, of course, pick and choose sites now based on how
bad the advertising is, but because advertising is so prevalent, I don't
really get a choice: most sites use it. Ironically, the sites I'd be most
willing to pay for (like HN) are also the ones with few or no ads.

