
Sergey Brin warns of AI threats through a 'technology renaissance' - jonballant
https://www.theinquirer.net/inquirer/news/3031311/googles-sergey-brin-warns-of-ai-threats-through-a-technology-renaissance
======
legitster
The idea that AI will affect employment is very dubious to me:

\- This stuff was talked about all the time during the industrial revolution.
But it turns out that automation just increased worker productivity and made
people more valuable.

\- Since ATMs were introduced, the number of bank tellers in the US actually
went up.

\- Certain industries have actually become _de_ automated over time. As an
example: coffee. It used to be exclusively made by automated machines. Now
hand-making coffees is one of the largest businesses in America.

\- AI is only as smart as the data you feed it. Even Google's own search
rankings have to have human minders because people are too good at gaming
their algorithms.

\- AI is only good at the thing that human beings are very bad at - making
quick, analytics-based decisions. Of course this is a threat to middle
management and executive functions. But for people who actually make things or
solve problems, it's just one more thing to make and solve problems with.

There are some very real concerns about organizations using AI to make
decisions about people without human oversight. Imagine an AI tasked with
college admissions: bad decisions without the possibility of human
intervention could hurt people. But are we sure it would be that much worse
than our current system, often tainted by bias, money, and racism?

My concern is that the leaders in the corporate tech world created huge
organizations by automating our lives, and now they want to act as gatekeepers
on the kinds of automation that are acceptable.

Edit:

To add to this - a company's stance on AI will probably follow its belief in
whether humans are underrated or overrated. Google designs products believing
humans are overrated - they think people will be victims of AI. Amazon thinks
people are underrated (based on Mechanical Turk, customer service, warehouse
design) - ironically, they don't think humans will overall be harmed by AI.

~~~
noobhacker
The argument that AI will take all the jobs is commonly made, but I agree with
you that it's a poor argument.

Much more plausible and concerning, however, is that AI will stratify the
labor force into 3 groups: highly-paid engineers (who manage the AI), highly-
paid service jobs (e.g. doctors, lawyers, psychiatrists), and poorly-paid
service jobs (e.g. baristas).

This stratification has already happened as our economy has advanced (you
mentioned baristas---in the past, those workers could have earned middle-class
manufacturing wages). The wage distribution in the services sector nowadays
looks like a dumbbell [1].

Wage stratification can lead to political stratification as poor people don't
have the resources to fight for their rights. Political stratification can
lead to further wage stratification, continuing the cycle. That's why I
believe it's important to deal with the AI threat now.

[1] [https://seii.mit.edu/research/study/how-does-growth-in-the-u-s-service-sector-impact-labor-polarization/](https://seii.mit.edu/research/study/how-does-growth-in-the-u-s-service-sector-impact-labor-polarization/)

~~~
root_axis
That seems like an arbitrary stratification. By the time AI is sophisticated
enough to have significant ripple-effects throughout the economy, the "highly
paid engineer" is going to be a tiny fraction of the labor force, akin to the
hedge fund manager of today. Also, why would I need to hire a team of
engineers when I can just spin up a few AWS AI instances?

~~~
noobhacker
That's entirely possible. I'm thinking more about the labor force of the next
40-50 years. In 100 years (or however long it takes for AI to improve itself),
engineers may also be obsolete.

------
austincheney
Not to worry just yet, as AI still hasn't been realized.

What many developers call AI - or, more precisely, machine learning (ML) - is
a collection of algorithms for making more intelligent decisions in the
context of the data at hand. That is simply smarter programming, but it isn't
intelligence. The machine is still limited to the static instructions
available.

Real AI will allow machines to spawn original decisions to solve new problems
spontaneously. We aren't there yet. The ethical implications will likely not
be any different than those confronted by people. It generally boils down to
_just because you can do something doesn't mean you should_.

~~~
ThomPete
A lion is still dangerous even though it's not conscious.

 _Edited for clarification. Not conscious in any way that would rival "real
AI" (as per the parent's point) or human consciousness._

AI will get increasing access to more and more crucial parts of society and
thus become more and more dangerous. It doesn't need to be "real AI" for that.

~~~
majewsky
Since eyebrows are being raised over your considering lions unconscious, how
about the following amendment?

> An earthquake is dangerous even though it's not conscious.

~~~
ThomPete
I was responding to the parent, who talked about real AI. Not sure why
eyebrows are being raised, unless someone wants to misunderstand the context
the comment was made in.

Of course a lion is conscious, but it's not conscious at the level the parent
means (real AI).

------
titzer
Until we invent (or accidentally stumble upon) an artificial mind with
volition and a sense of self, AI will continue to be an increasingly powerful,
increasingly self-directed set of optimizers. We'll continue to use AI in more
and more places for the ultimate goal of, well, optimizing. Optimizing what exactly?
Energy efficiency. Check. Materials. Check. Drug discovery. Check. Financial
transactions. Business rules. Capacity planning. Cryptocurrencies. Stock
market. Ad placement and targeting. Predicting consumer behavior. Wait...

You see, it will all end up in the same place. AI is a way to _make money_. AI
is _just_ optimizing economics, but _optimizing economics on steroids_.
Eventually, the company with the best AI is going to have the best advantage
in all business interactions and planning. The first to master-level AI is
going to end up with essentially all the money in the end.

Don't you think the big players know this? Of course. AI will be employed in
exactly the same way as every other piece of tech in recent history: to make
obscene amounts of money.

Don't be surprised when making money screws consumers over. It's just
business. No hard feelings, right? Thankfully, the AI is incapable of feeling.

The ultimate AI is the one you can give one command to: _MAKE MONEY!_

------
boomskats
At least there isn't anyone out there designing and manufacturing military
grade weaponised drones and letting them autonomously decide whether to pull
the trigger.

Right?

~~~
ccccccccccccc
I really wish we could know what is going through CEOs' heads when they make
statements like this while simultaneously contradicting themselves.

~~~
baxtr
Probably he’s already been replaced by a replicant.

------
xefer
There needs to be a term to describe these rail-thin articles whose only real
effect is to provide an HN conversation-inspiring headline.

~~~
blauditore
Well, I'd say it's pretty close to clickbait. It also reminds me a bit of "dog
whistling": [https://en.wikipedia.org/wiki/Dog-whistle_politics](https://en.wikipedia.org/wiki/Dog-whistle_politics)

~~~
Robotbeat
Not very good clickbait, as it’s clear from the headline that the HN comments
will be better than the article, so I usually skip the article and read the
comments. :)

------
freehunter
So a sort-of-on-topic question: for all the people who are warning of the
coming AI apocalypse and warning us to be careful with AI... how do you
enforce that? What’s the enforcement mechanism to ensure that the bad people
don’t get AI and that people don’t create bad AI? All these warnings, but what
can we do about it?

If strong AI is invented, especially if it’s open-sourced, the cat is out of
the bag and it’s game over, right? We’ve seen how good we are at preventing
rogue states from getting nukes. How can we keep them from getting life-ending
AI?

~~~
emerged
I think it's going to parallel the virus/anti-virus eternal struggle. The bad
AI will need to be targeted by the good. It'll be a mess, of course.

~~~
ericb
I think any breakaway AI would be a Highlander situation, in that it would
ensure there can be only one.

~~~
freehunter
Yeah, that's more my point, you're right. If we're considering the endgame
of the AI singularity, only one AI can win. If there are multiple, they'll
each keep getting better, and they'll each keep warring against the others
until only one is left (or none are left). That's the doomsday scenario
people are warning about when they say this is coming.

But out of all the people warning about it, I haven't yet seen anyone say
"this is how we prevent it".

~~~
ericb
I think the only way to win is to make sure the first one is perfectly
friendly, or to make sure we are integrated with the first one. I don't give
us great odds on either front.

~~~
freehunter
I'm trying to remember a book I read a while back, it was a sci-fi book where
in the beginning chapters a war was being fought between the US and China to
prevent China from developing a strong AI. The main character is killed, but
they preserved his body and brought him back to life some years later in a
cyborg body, where he learns that the world is entirely controlled by a
friendly, benevolent AI and everyone has it embedded in their skin. But the AI
goes rogue suddenly and the only people who survive are the luddites who
refused to join the AI and the people who were off-world at the time the AI
went rogue.

For the life of me I can't find the book even through months of Google
searching, but anyway my point is... a "friendly" AI can be hijacked. Or
decide to go rogue, since we can't begin to comprehend its thought processes.

\--edit: sorry, I went off topic a bit... I don't want to be just another one
of those "AI will kill us all" kinds of people because, as my first comment
suggested, that's completely useless fearmongering unless someone has a way to
stop it. Kind of like saying "a supervolcano" or "solar flares" or something
else completely unpreventable will kill us all... unless there's something you
can do to stop it, life goes on until it doesn't. I'd be interested in hearing
how to stop it; otherwise, I wish people would stop printing these articles.

~~~
ericb
I follow the point you're making. We do have options, and perhaps if we were
worried enough, they'd be put into effect.

You could make AI research illegal. You could pour massive R&D into
human/brain interfaces. You could remove or reduce medical device testing
restrictions on the basis that this is an all-or-nothing bet for humanity.

------
aurelien
That is just wrong, and here is why.

We, the People, have been under money pressure for the last 7,000 years; this
slavery mode of society is the model that brought us, after many troubles, to
this point in our evolution.

Now that the robotics/cobotics/mechatronics sciences are on the point of
joining with artificial intelligence, we, the People, are close to being free
from the money model of society.

And that will be great: when 100% of goods and services are produced by
machines, that will be good and great enough to bring us to Type 1 on the
Kardashev Scale. We are at a time when our civilization will undergo an
evolution in kind.

~~~
mmirate
Even assuming that labor becomes a post-scarcity resource, what about capital
resources?

How would metals be allocated between current usage and robot-building?
Energy, between current usage and powering these robots?

------
darepublic
Open-source image recognition has a ton of ethical problems. Now, for a low
cost, people can set up a very sophisticated spying apparatus. I can't wait
until neural nets can decipher people's lip movements. You could read people's
conversations from a distance, automatically. Get ready to censor yourself all
the time.

------
jacksmith21006
I have heard the narrative that 90% of people farmed 120 years ago, and now
look at things. I just do not buy that this is going to work this time.

I live in the US, and at some point we will have to talk about UBI. But right
now we cannot even have constructive conversations about simple things.

~~~
Robotbeat
UBI is not too different from a lot of the social programs (Social Security,
Medicare, etc.) that grew out of the turmoil of that transition, though. Just
a difference of degree, IMO.

Also, a jobs guarantee could do many of the same things (and similar to some
of FDR’s programs). We might do both.

~~~
jacksmith21006
Look at the massive fight about Obamacare, and that is really nothing compared
to UBI.

------
40four
Can we please not upvote articles from tabloid websites? The article is brief,
shallow, and has a glaring typo in it. This is not journalism. They simply
reprinted some quotes. Zero research, zero value added to the conversation.
Zero editing. The topic is interesting, but I wish a more legitimate source
were linked. Why are we rewarding shoddy work with ad exposures?

------
jonballant
While it's easy to dismiss the sci-fi, fantastical end-of-the-world claims
about the dangers of AI, it's important to at least address some of the
realistic externalities of this technology.

"Brin is showing more real-world concerns of AI-powered systems replacing
human jobs or being used to spread propaganda and fake news rather than rise
up and enslave humanity"

------
debt
AI is a threat? Seems more like tech companies leaking or selling personal
information is the real threat.

AI looks to be more of a distraction.

------
z3t4
Maybe we should start with solving OCR before taking on artificial
intelligence. OK, we have self-driving cars, but they can't actually read road
signs.

------
denzil_correa
Link to the Alphabet 2017 Founders' Letter:
[https://abc.xyz/investor/founders-letters/2017/index.html](https://abc.xyz/investor/founders-letters/2017/index.html)

------
makach
this is all fine, the first generation of 'true' ai will probably be extremely
biased and too simple for practical use. even though we are automating and
digitizing more and more and making everything more streamlined, the rules
will be as rigid as they have ever been. the human factor will come from
generations of ai iterations -- if we are lucky. to be rejected by a machine
will be a brutal experience when we all get this pushed in our faces. the rich
will grow richer and the middle class will disappear in favor of poverty
across the board.

~~~
tim333
>first generation of 'true' ai will probably be ... too simple for practical
use

AI tends to either not really work or go way beyond humans - look at the Go
programs that didn't work very well, and then in a short period we got things
like AlphaZero, which, given the rules of Go or chess, surpasses all humans in
a few hours of playing itself. By the time AIs are passing Turing tests and
thinking in a human manner, they are going to be scary smart in many ways.

------
xtiansimon
Human immiseration mitigated by AI—it’s the algorithm’s digital design?

------
csomar
After multiple months of research, FBI and cybercrime agents realized that the
perpetrators and the hackers running the servers were not real humans. It was
a decentralized consciousness running across multiple AWS instances. A
presidential order was issued to shut down all of AWS's data centers.

But it was too late. The decentralized super AI had already recruited and paid
humans through cryptocurrencies to run servers at home in order to process the
AI's algorithms. It had even planned for a power-outage scenario: several
people had bought electricity generators financed by the AI's crypto account.

Decentralization is a big problem for the US and Europe. It is very hard to
get the whole world on board with fighting the super AI. By the time (a couple
of days later) the UN had gathered most of the world's leaders in Geneva, the
super AI already controlled a fund of over $500bn. It had created a
marketplace to recruit people, hackers, military personnel, and businesses.

Chaos stormed the big cities. Police officers were shot dead by professional
snipers. Fake news was rampant, and nobody could believe, trust, or understand
anyone else. Several infrastructure services were disrupted by massive DDoS
attacks. In the meantime, the AI was able to recruit a couple of countries and
get them on board for a huge attack on all the other countries. There were no
negotiations; the AI calculated the probability of success of its massive
attack operation and went all in.

