

What Do You Think about Machines That Think? - privong
http://edge.org/response-detail/26249

======
TheCraiggers
My first thought was to make a hard rule that it must obey the laws in the
location it's in. In the US, that would be federal, state, and the various
local laws of your city or township or whatever.

And then I laughed at the mental image of a robot stuck in a deadlock due to
all the mutually exclusive laws we have. (Of course, in Hollywood, replace
"deadlock" with "exterminating all humans".)

~~~
thirdtruck
Even attempting to implement such a rule could have benefits. We could end up
with formalized proofs that certain laws are objectively contradictory, which
in turn could be factored into the design of future laws as a preventative
measure. Imagine "test-driven legislation," if you will.
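
As a rough sketch of what that could look like: encode statutes as constraints
and let a solver hunt for scenarios with no legal course of action. A minimal
example using the Z3 SMT solver (pip install z3-solver); both "statutes" are
invented for illustration:

    from z3 import Bool, Solver, Implies, Not, unsat

    in_zone = Bool('in_restricted_zone')          # scenario fact
    broadcasting = Bool('broadcasting_position')  # the robot's choice

    s = Solver()
    s.add(broadcasting)                         # Statute 1: always broadcast your position
    s.add(Implies(in_zone, Not(broadcasting)))  # Statute 2: never broadcast in the zone
    s.add(in_zone)                              # scenario: the robot enters the zone

    # unsat is a machine-checked proof that no behavior satisfies both statutes
    print("contradictory" if s.check() == unsat else "compliant option exists")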

~~~
pnathan
IANAL, but you'll find that laws tend to be nuanced and written with human
judgment in mind. Such efforts are at cross purposes with formalization into a
software system.

~~~
thirdtruck
Closing that gap is the point of the exercise. :)

------
swframe
We will build very smart machines before we build machines that think. Those
very smart machines will probably change the world the way the internet did.
Imagine having smart machines that can farm, build houses, clean up pollution,
etc. Thinking machines are too far off to know what to expect. We need to
first consider what an increasingly automated world would mean. In particular,
what will it mean for employment and average income?

------
humanfromearth
We'll figure something out when we get there.

What I've found quite annoying lately (more than usual) is that there is a lot
of "what if machines, Terminator, bla bla bla, death! destruction!"

I'm not sure if it's the recent developments with deep neural nets; maybe it
is. Maybe it's the high-profile neo-luddites like Hawking or Musk. I wish I
could have a megaphone and shout: AI with human-comparable intelligence is
still a long way off, and a lot of pieces are missing. Just because we can
identify some MNIST digits with 95.4% accuracy, or 200 species of dogs, or
some guys playing baseball, doesn't mean we're anywhere close to a working
AGI.

But instead of focusing on doing the actual work that would get us to these
machine 'thugs', or at least promoting more research in that direction, most
prefer to sit in their armchairs and argue to the death about the hypothetical
T-1000.

Thankfully there are smart people out there doing real work and getting
results so that is reassuring.

My problem with these extremists is that they produce hype. Hype that got us
the AI research winter last time.

~~~
Houshalter
Musk, Hawking, et al. are not talking about deep neural nets that do well on
MNIST, but what will be possible 20-50 years down the road. There are very
real concerns about AI and it's annoying to see them just dismissed by calling
people "neo-luddites".

>But instead of focusing on doing the actual work that would get us to these
machine 'thugs', or at least promoting more research in that direction, most
prefer to sit in their armchairs and argue to the death about the hypothetical
T-1000

There has been more investment in AI in the last 5 years than in the 50 years
before it. Even Musk has invested millions in AI research.

~~~
mcguire
Spending time and effort on AI "20-50 years down the road" is a waste because:

(a) statistically based, but obviously not (or not obviously) intelligent
systems are going to change the legal and practical landscape well before
Ultron gets up and starts walking around, and

(b) the kinds of things they're worried about are almost certainly going to
appear silly when the real thing does roll around.

The moral responsibility for a bot that orders drugs certainly belongs to the
person who set up the bot to order from the "black market"; for a drone that
kills the wrong people, it belongs primarily to the officer who ordered the
mission knowing that the statistical software wasn't perfect. In my opinion;
the Supreme Court may differ.

Also in my opinion, the dangers of "singularity" AI are overblown due to a
soft, fuzzy, but very real upper limit on "intelligence".

------
superobserver
The machine (software program) lacks sufficient awareness beyond mechanical
execution of the programmer's own will; therefore, it holds infinitesimal
responsibility for its actions. In order to prevent further violations of law,
it should be decommissioned. The programmer should be fined an amount
equivalent to the bot's illegal purchases, plus $1K for each offending
purchase.

As for other machines in the future (androids), it seems obvious enough to me
that - insofar as they are intelligent and thus responsible (IQ testing is
commonly used in the US judicial system, with the cutoff being an IQ of 70 or
above) - they can be tried like any human being.

------
zackmorris
Culpability goes all the way to the top. So the top ranking official who
orders use of the machine is to blame, which could very well be the president.
And it goes deeper than that, because the society that elected the president
endorses its use if it doesn't recall him or her. Since recalls are rare,
collateral damage from thinking machines is basically inevitable and I don't
think most people have any idea what is coming.

That said, I expect that AI is going to transcend the way people solve
problems to such a degree that it may even alleviate the underlying causes of
war like artificial scarcity and ignorant intolerance.

------
pnathan
The article has nothing to do with thinking. But in the scope of the article,
it's an interesting legal conundrum. I expect the end result is that the class
of software which can perform illegal actions autonomously will have to be
brought under warranty and the creators held responsible.

"Acme Inc. Warrants that under Factory Conditions and Standard Programming,
Software Agent R2-D2 will not conduct illegal actions".

This is not outside the range of possibility, or even feasibility, today, but
it will require a genuine engineering mentality to create such systems.

------
avodonosov
The same way we deal with other entities: rats, viruses, hurricanes. If they
harm us, we don't "punish" them; we just seek ways to prevent them from
harming us.

------
Houshalter
Why was the title changed to the Edge.org question of the year?

------
avodonosov
We can give them a negative reward (decrement some counter in the machine's
memory). The machine will want to avoid that!
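
Tongue in cheek, but that really is how reinforcement learning handles
punishment: a negative reward is just arithmetic on numbers in memory. A
minimal tabular Q-learning sketch (states, actions, and rates are invented):

    import numpy as np

    Q = np.zeros((4, 2))     # the machine's "memory": 4 states x 2 actions
    alpha, gamma = 0.1, 0.9  # learning rate and discount factor

    def update(state, action, reward, next_state):
        # Standard Q-learning: a negative reward lowers the stored value,
        # so a greedy policy learns to avoid that action in that state.
        Q[state, action] += alpha * (
            reward + gamma * Q[next_state].max() - Q[state, action])

    update(state=0, action=1, reward=-1.0, next_state=2)
    print(Q[0, 1])  # now negative: the machine "wants" to avoid this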

------
known
Machines killing Machines is BETTER than Humans killing Humans

~~~
mcguire
Sure, but machines killing humans scales better than humans killing humans.

------
fein
What do machines think about machines that think?

------
avodonosov
And what do they think about you?

------
cafebeen
Just follow Asimov's laws of robotics:

0) A robot may not harm humanity, or, by inaction, allow humanity to come to
harm.

1) A robot may not injure a human being or, through inaction, allow a human
being to come to harm.

2) A robot must obey the orders given it by human beings, except where such
orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not
conflict with the First or Second Law.
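
Taken literally, the laws define a lexicographic preference over candidate
actions: a lower-numbered law only matters among actions that tie on all the
higher ones. A toy sketch (the predicates are invented, and deciding them,
i.e. recognizing "harm", is of course the actual hard problem):

    def choose(actions):
        # Rank candidate actions by the laws, highest priority first.
        # False sorts before True, so min() prefers actions that avoid
        # harming humanity, then humans, then disobedience, then
        # self-destruction; lower laws only break ties left by higher ones.
        return min(actions, key=lambda a: (
            a.get("harms_humanity", False),
            a.get("harms_human", False),
            not a.get("obeys_order", True),
            a.get("self_destructive", False),
        ))

    obey   = {"obeys_order": True,  "harms_human": True}
    refuse = {"obeys_order": False, "harms_human": False}
    print(choose([obey, refuse]))  # refuses: the First Law outranks the Second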

------
logn
If corporations are people, then robots are people too.

Corporations pretty much only understand monetary fines, and robots will only
understand some form of disconnecting the cord.

Similarly, corporations have a corporate veil to shield their owners from
liability, assuming their business finances are separated from personal ones.
Robot designers would have limited liability assuming their personal decision
making was separated from the algorithmic reasoning.

And we've managed to pass the CAN-SPAM Act, so either we should restore robot
1st Amendment rights, or otherwise overturn Citizens United.

~~~
icebraining
_If corporations are people, then robots are people too._

Corporations have legal personhood because they're an association of people. I
don't see how this applies to robots. If you do, I don't want to see your
robots!

~~~
logn
Corporations can be wholly owned by other corporations. And they often have an
obligation to seek to maximize profits above all other concerns. How's that an
association of people?

They're treated as people because there was a similar conundrum as to how to
apply laws to corporations. Justices decided to start applying laws as though
they're people. It's the same situation Schneier is describing with robots.

~~~
TheCraiggers
>Corporations can be wholly owned by other corporations. [...] How's that an
association of people?

If you follow that thread long enough, you will eventually find people running
the show. You will not find robots, sentient trees, or rocks sitting around a
board table.

~~~
jacquesm
Hm... now you have me wondering if I could slip a circular chain of ownership
past the chambers of commerce here.

I'm pretty sure you could bootstrap this:

Person 'X' incorporates company 'A' and receives the shares.

Company 'A' incorporates company 'B'.

Company 'B' makes some money.

Company 'B' buys the shares of company 'A' from person 'X'.

Presto, no more people involved, just two companies owning each other. It
might even be possible to have company 'A' buy the shares of company 'A'
directly from the person who incorporated it, so it would own itself.

You'd have to fiddle with the valuation during the transaction because you're
using the company's capital to do the purchase, essentially ending
(temporarily) with a valuation of zero when the transaction has completed,
after which the company would have to generate more income to become liquid
again.

Tricky. Very much tempted to try this.
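
For what it's worth, the book-keeping checks out. A toy model of the steps
above, which is also exactly TheCraiggers' "follow the thread" test from
upthread (whether a registry would accept the filings is another question):

    owners = {}          # company -> current shareholder

    owners["A"] = "X"    # 1. person X incorporates A and takes the shares
    owners["B"] = "A"    # 2. A incorporates B
                         # 3. B earns some money (not modelled)
    owners["A"] = "B"    # 4. B buys A's shares from X

    def ultimate_owners(entity, seen=frozenset()):
        # Walk up the chain of ownership, stopping if we revisit a node.
        if entity in seen:
            return set()           # pure cycle: nobody human at the top
        if entity not in owners:
            return {entity}        # a natural person, not a company
        return ultimate_owners(owners[entity], seen | {entity})

    print(ultimate_owners("B"))    # set(): no people running the show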

~~~
Retric
"You'd have to fiddle with the valuation during the transaction” Just have
company B use some leverage aka get a loan to buy A. That way it's valuation
does not need to change pre or post purchase.

PS: Wondering if this has happened somewhere such as Japan where they have
some very convoluted ownership diagrams.

~~~
jacquesm
Good point!

I wonder how you'd go about the usual annual shareholder meetings and things
like that. Without a real person to represent even a single one of the shares,
it would be pretty hard to get a vote on anything. The board of directors
would be a bit problematic; hiring a CEO could be done, but who would do the
hiring?

~~~
Retric
This is not an uncommon situation: when some venture fund owns a large share
in a company, they just appoint someone to sit on the board. The appointee
need not actually own shares; they just represent shares. So company A sends
Sam to be on B's board and company B sends Ted to sit on A's board.

The funny thing is that taxes would just end up eating any profit, as B sends,
say, 70% of their profit to A as a dividend, which sends 70% of 70% back to B
as a dividend, and repeat.
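
Back-of-the-envelope, taking that 70% figure at face value: the profit decays
geometrically as the dividend bounces back and forth, e.g.:

    remaining = 100.0        # initial profit sitting in B, toy units
    for hop in range(20):    # dividend bounces B -> A -> B -> ...
        remaining *= 0.70    # only 70% survives each taxed hop
    print(remaining)         # ~0.08: over 99.9% has gone to tax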

~~~
jacquesm
You don't have to pay out dividends, and even if you did, in 100% ownership
situations dividends between companies are not taxed at all (at least not
here).

