
A Google Brain engineer’s guide to entering AI - allenleein
https://80000hours.org/articles/ml-engineering-career-transition-guide/
======
acconrad
I asked this a couple of years ago here and I feel like I keep having to ask:
is this our industry's way of trying to tell us something about where our jobs
are headed?

I can't help but get the sense that we're trying to tell ourselves "evolve or
die" and that the web jobs will move towards AI/ML if you want to stay
employed in the 10-20 year range.

~~~
danaur
I don't really understand the sentiment that the web jobs are going away.
Web/mobile and supporting technology has only risen in relevance. ML has some
narrow use cases where it can solve some interesting problems or eke out some
additional efficiency, but I don't see programming moving towards it.

~~~
snodnipper
as a mobile dev, I have been considering this with some UX folks. A few things
going for AI/ML: conversational interfaces should be far cheaper/quicker to
produce in comparison to traditional app development. Furthermore, talking and
touch are natural ways to interact (e.g. pointing on a map whilst talking) vs
endless long presses and menu items.

Whilst I have been involved in Android since the beginning, I personally would
not encourage younger folks to focus on it too much at the expense of new
technologies (inc. blockchain tech, for that matter).

That said, we hopefully have another 10 years!

~~~
mattlondon
Voice and speech stuff isn't going to work in today's work environments. Can
you imagine an open office full of people jabbering away to their computers? I
guess voice printing will to some extent prevent other people "hijacking"
your computer while trying to control theirs, but the noise would be extreme.

For certain industries - e.g. tech - people like quiet to concentrate. Not
unique to tech of course... libraries are quiet too for the same reason. I
can't imagine the average office where concentration is required benefiting
from a call-centre-esque environment where everyone is talking out loud.

.. but I guess it might mean we can get away from open offices and back to
private offices?! :-)

~~~
sdenton4
One could imagine responding to subvocal commands, though, with a microphone
in contact with the throat... I can't imagine using it for programming, but it
could be cool for quick navigation.

~~~
antidesitter
Perhaps you can skip the vocal system entirely and go straight to EEG
(assuming we manage to improve the signal-to-noise ratio).

------
RA_Fisher
I'm surprised there's no mention of statistics, considering it's the basis of
machine learning.

~~~
cgearhart
When software engineers do statistics we call it "machine learning", when
statisticians do software engineering we call it "data science". ;-)

~~~
RA_Fisher
Heh, statisticians invented machine learning (Breiman, Hastie, Tibshirani,
etc.).

------
option_greek
I'm curious about the emphasis on safety. I don't think anyone claims they are
close to achieving AGI. And yet there are already jobs in AGI safety? :)

~~~
rossnordby
'Safe' AGI is a harder problem than 'any' AGI, so getting started early could
be helpful. Especially considering how bad many of the AGIs in the 'any'
category could be. Worst case, we don't get a mulligan.

~~~
AndrewKemendo
Safe AGI is a pipe dream. It's either narrow AI and mostly existentially safe,
or it's general AI and clearly existentially risky.

The FHI/SIAI/MIRI people have been spinning their wheels for over a decade on
this and made zero progress.

~~~
rossnordby
I wouldn't exactly consider myself optimistic about safe general AI either,
but there are extremely strong incentives in the direction of developing
greater generality. Not sure we have the luxury of not having some form of AGI
in our future.

The current safety progress certainly isn't ideal (where ideal would be "oh
hey we figured it out, boy that was easy"), but it is roughly in line with
what I'd expect for a smallish new field trying to lay foundations. No one
knows what to do; there may not be any other option than to flounder for a
while.

~~~
AndrewKemendo
It's not that safety progress isn't ideal; it's that as a problem set it's
fundamentally in conflict with the concept of AGI, and therefore intractable.

It's akin to trying to come up with a firearm that can't kill innocent people.
The problem isn't even coherent.

The only solutions are:

1: Redefine what AGI means, like openai has done.

2: Prevent AGI altogether

~~~
morgancmartin
I would say the trick is making the problem statement coherent. Sure, coming
up with a firearm that can't kill innocent people is impossible because the
innocence of someone is subjective. So instead, you can choose to approximate
a solution and then iterate until you have a solution that is technically
imperfect but so close to your initial goal that the difference is negligible.

For instance, in your firearm example, we can't determine who is innocent and
who isn't. But we can embed a facial-recognition device into the firearm
that then cross-references whoever the firearm is pointed at against a
database of known non-innocents.

And then exclude any targets obviously under 18.

And then make the model probabilistic and add greater weight to targets that
appear to be carrying weapons of their own or that are acting in an obviously
dangerous manner.

You get the point. After enough time, you would have a firearm that is
(arguably) better than the one you started with and that so closely
approximates the goal of your initial problem description that you don't care
to make a distinction.

I could be wrong, but I imagine this is how AGI safety researchers think about
their work to some degree.

~~~
AndrewKemendo
Decision-tree-based processes like the one you outline are not only naive
(they don't operate with complete observability), they're also not resilient
(they're susceptible to workarounds).

That explicit reasoning approach falls squarely into GOFAI, symbolic
reasoning, expert systems etc... so we're well past that at this point and
know the problems with it.

Anyway, again, it's not a tractable problem. Something smarter than you is
going to be able to eventually beat whatever restrictions you put on it. Might
as well just be comfortable with it.

~~~
antidesitter
> That explicit reasoning approach falls squarely into GOFAI, symbolic
> reasoning, expert systems etc... so we're well past that at this point and
> know the problems with it.

The symbolic reasoning you're talking about is going to make a comeback
through probabilistic inductive programming, so don't dismiss it. It is
necessary to attain sample efficiency and generalizability, which is virtually
impossible with purely statistical approaches. I recommend you look into Josh
Tenenbaum's work:
[http://science.sciencemag.org/content/331/6022/1279](http://science.sciencemag.org/content/331/6022/1279).

~~~
AndrewKemendo
I wasn't clear with that statement. As clarification, I'm not saying that
GOFAI approaches aren't going to get better or be more relevant.

Rather, the AI community is quite confident that you can "fool" every
AI approach to date. That has been true the longest for GOFAI approaches, and
now we're doing it for DL/RL.

------
yters
What is the big job market for AI/ML? I see some Kaggle competitions, and a
few high wage jobs working for the big tech companies, but it still seems
pretty niche.

Makes sense from a theoretical viewpoint, b/c AI/ML models are deliberately
not Turing complete, so that they remain optimizable, whereas your average
programmer easily churns out "models" in a Turing-complete language. So, joe
blow programmer is universally more powerful than any AI/ML algorithm. He can
program an AI in his TC language, but the AI cannot replicate his work in its
non-TC language.

~~~
Ftuuky
The bank where I work needs so many data scientists and AI/ML/DL engineers and
can't find enough of them, to the point that it's paying for master's degrees
and MOOCs for any employees who want to study it, as long as they already have
some experience with Python/R or a background in math/stats/engineering.

~~~
yters
PhD in Computer Engineering with MSc in AI/ML here. Happy to do remote work
for your bank :)

~~~
Ftuuky
Working remote is not allowed due to certain regulations. That's why it's so
hard to hire good data scientists!

~~~
irregular-john
And you're in top-20 I-banking? So most likely in a huge city, with a huge
swath of data science talent. The issue isn't folks being unable to work
remotely.

~~~
Ftuuky
The thing is that data scientists are wanted by so many companies that besides
salary you have to compete on perks and other things. IBs can't provide some
of those perks, like remote work, which makes them less attractive to potential
candidates. This is my opinion, not my employer's.

------
aligshow
neither Google nor Stanford has the authority to unilaterally declare a limit
to research

> Google, OpenAI, Facebook, Uber, or Microsoft.

none of these are academic institutions, and none of them have the authority
to define a limit to research

