
My story as a self-taught AI researcher - emilwallner
https://blog.floydhub.com/emils-story-as-a-self-taught-ai-researcher/
======
narenst
This is a really good time to be an Independent Scientist (aka Gentleman
Scientist) in this field because of how nascent deep learning and similar
techniques are. It requires a lot of trial and error and time/cost investment
to bring AI techniques to the masses.

The FAANGs are trying to hire all the top talent (including Emil, who wrote
the post), but I believe these independent researchers will be the ones
finding new opportunities to make AI useful in the real world (like colorizing
b&w photos or creating website code from mockups).

The biggest challenge I see for these folks is access to high-quality data.
There is a reason Google is releasing so many ML models in production
compared to smaller companies. Bridging the data gap requires effort from the
community to build high-quality open-source datasets for common applications.

~~~
woah
On the other hand, the lack of data for independent researchers may encourage
the development of low-data techniques, which is much more exciting in the
long term, since humans are able to learn with far less data than most
machine learning techniques require.

~~~
TrainedMonkey
Arguably humans have a lifetime of data which was used to develop a model of
the world that is amazingly efficient at interpreting new data.

~~~
cygaril
Or our entire evolutionary history of data.

~~~
AlanSE
...which fits into a size of less than 700Mb compressed. Some of the most
exciting stories I've read recently for machine learning are cases where
learning is re-used between different problems. Strip off a few layers, do
minimal re-training and it learns a new problem, quickly. In the next decade,
I can easily see some unanticipated techniques blowing the lid off this field.
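The strip-and-retrain recipe described above can be sketched in plain Python. Everything here is a made-up toy (the frozen feature functions stand in for pretrained layers, and the target task y = 2x + 1 is invented); it only illustrates the shape of the technique: reuse the learned stack, retrain only a new final layer.

```python
# Toy transfer-learning sketch: a frozen "pretrained" feature stack plus a
# freshly trained final linear layer. The features are hypothetical stand-ins
# for layers learned on some large source task.

def frozen_features(x):
    # "layer 1": fixed feature maps (pretend these were learned elsewhere)
    h = [x, x * x, abs(x)]
    # "layer 2": a fixed squashing nonlinearity
    return [v / (1 + abs(v)) for v in h]

def train_head(data, lr=0.1, epochs=200):
    # Train ONLY the new final layer (weights + bias) by plain SGD on
    # squared error; the frozen features are never updated.
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = frozen_features(x)
            err = sum(wi * fi for wi, fi in zip(w, f)) + b - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# New target task: y = 2x + 1 on [-1, 1]
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-10, 11)]
w, b = train_head(data)

f = frozen_features(0.5)
pred = sum(wi * fi for wi, fi in zip(w, f)) + b
print(pred)  # near 2.0; the frozen features can only approximate the new task
```

The point of the sketch is the division of labor: only the small head is retrained, which is why this style of re-use needs so little data and compute for the new problem.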

~~~
eanzenberg
I’m not sure our genetics encodes all the physics of being a person. A human
brain is so complex we’re not even close to simulating it on silicon.

------
itsmefaz
The problem with Emil's approach to learning is that it restricts his ability
to learn anything he has no intrinsic goal for. That includes areas like pure
mathematics, theoretical computer science, finance, economics, literature,
etc. Those subjects require a different sort of motivation than the
motivation to achieve a set goal, i.e. capitalist motivation.

Also, Emil's approach to learning will create a flawed sense of expertise.
Look at how the article presents him as if he has _deep domain expertise_,
which might not be true.

One important thing to consider is to look at the article more as a
content-marketing tactic that FloydHub is using to promote its brand, which
might not serve engineers well, as it lacks some aspects of truth.

~~~
pingyong
>Those subjects require a different sort of motivation than a motivation

Is that really the case? Apart from the obvious "maybe that's what they're
intrinsically interested in": if you start with a problem, try to solve it,
and "pure mathematics" (whatever exactly that is supposed to be, anyway) is
required to arrive at a solution, it becomes part of the intrinsic motivation.
And if you keep solving problems without it, the question that eventually
comes up is "is it really useful at that point?"

I do however agree that if you're looking at someone whose qualification is
primarily his "portfolio", you do actually need to check whether it includes
interesting problems, or at least projects that are related to what you need
them to do at your company. But if that is the case, I really don't see a
problem.

~~~
itsmefaz
That's correct. If one is solving a problem and it requires learning complex
topics, then those topics become part of the individual's intrinsic
motivation. However, the problem with his idea is that it assumes learning
always requires a goal, and that learning without one has no value. My
argument here is that learning doesn't have to be attached to any goal; the
process of learning can purely be a stimulating activity and might not
necessarily have any capitalist value attached to it.

A few examples:

Reading literature is a purely fun and mentally stimulating activity that
might or might not have any goal attached to it.

Travelling, likewise, is a purely fun activity that might or might not have
any intrinsic goal attached to it.

I started learning algebra at random, without any intrinsic goal, purely for
fun. Playing chess is an activity without any intrinsic goal.

~~~
pingyong
>might not necessarily have any capitalistic value attached to it.

You seem to use goals and "capitalistic value" interchangeably. I would just
like to add that projects in people's portfolios are often not initially
created to make money, but simply to satisfy the creator's curiosity or
because they thought it would be neat.

~~~
itsmefaz
> I would just like to add that projects in people's portfolios are often not
> initially created to make money

We both can agree on this. But that's not how Emil looks at it; his entire
point is that projects are better credentials than even degrees.

And since projects have higher value in portfolios, they implicitly derive
capital, which is one of the reasons they have capitalistic value.

You and I can both agree that projects and learning do have fun elements
attached to them and might not necessarily be part of a larger goal. However,
that's not what his views are!

~~~
pingyong
>However, that's not what his views are!

I'm not sure how you got to that conclusion. Certainly I don't see how

>projects are better credentials than even degrees.

would necessarily conflict with

>that projects and learning do have fun elements attached to them and might
not necessarily be part of a larger goal.

------
K0SM0S
This was a great read (and great nuggets, like that paper on Intelligence by
Chollet).

I wonder:

— Is math a problem for non-academic researchers?

Most papers strike me as requiring non-trivial knowledge of linear algebra,
for instance; topology sits right behind; the bold seem to take it one up on
category theory as we speak, and geometric algebra is quickly gaining
traction too. Lots of math, cool math, but math nonetheless.

Not that you can't learn these on your own, but how big is the gap _in
practice_, on the job, compared with actual PhDs in ML/math? (How much of a
hindrance, a problem, is it for the self-taught researcher?)

— "Contracting" in the field of AI sounds great but, how exactly? Especially
solo: what type of clients and how/where to find them, what type of 'business
proposition' as a freelancer do you offer, what's the pricing structure of
such gigs?

I mean, I can sell you websites and visuals and stuff, but AI? I know first-
hand most SMBs (IME the only real customers for freelancers) are a tough sell:
their datasets are tiny and demand scripting skills to sort out (extract
business value), not AI, so the value proposition is low for both parties;
it's still early adoption so 90% don't even consider spending 1 cent on "AI"
unless as a SaaS (they actually don't need to know if it's AI or programming).

I can imagine tons of fantastic research to do with SMBs, as partners or
'interested sponsors' (should they reap benefits on a low investment), but
really not much yet in the way of "freelancer products" to market and sell for
a living. I'm eagerly anticipating those days, but it's more like 2025-2030 as
I see it.

I would love to hear first hand takes on this.

~~~
deepnotderp
To be honest, linear algebra is not that difficult to learn on your own, and
plenty of people do. Gilbert Strang's course on OCW has made introductory
linear algebra quite accessible.

Things like topology (e.g. TDA, persistent homology, etc.) aren't really
mainstream yet, but even then most of it isn't really "hardcore" math in the
sense that you can get away with a basic understanding, e.g. what a Vietoris-
Rips complex is and why we use it instead of a Cech complex in TDA. Plus most
DL research nowadays is pretty (advanced) math-light. That being said, taking
the time to understand the math is absolutely worthwhile in my experience.

It should also be noted that a lot of real world ML/AI projects in industry
aren't really about brand new algorithms using advanced math, but rather more
about applying mostly existing techniques to messy, noisy real world data and
taking the time to understand the domain you are applying it to.

~~~
FranzFerdiNaN
I work as a data engineer, and I was interested in learning some of the stuff
our data scientists do so I can better communicate with them. Teaching myself
some statistics was fine; probability was fine too, and quite fun and
surprising. Both subjects have plenty of books that let you understand the
intuition behind the things they do without having to dive deep into the
proofs. Linear algebra was and still is a struggle though. I've sampled many
books, from Strang's book to Linear Algebra Done Wrong/Right to some books
that are used in the local university's CS courses. But they are all the
same. It's clear they are all written by mathematicians for math students,
probably to be used as a way to teach students how to write proofs at the same
time? It's just one page after another of increasingly esoteric calculations
and proof after proof after proof. Which is fine if you study math, but bad
for me, because I don't want to work out the proof of why taking the
determinant of an inverted matrix works; I want to know what it means and why
one would want to make the effort to do it.

Basically, I want a book like Statistical Rethinking or Blitzstein's
Introduction to Probability, but for linear algebra. And I haven't been able
to find it.
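For what it's worth, the specific determinant question above has a short conceptual answer: the determinant measures how a linear map scales volume (area, in 2D), so inverting the map inverts the scaling, giving det(A⁻¹) = 1/det(A). A minimal pure-Python check of the 2×2 case (toy code, not taken from any of the books mentioned):

```python
# 2x2 matrices represented as [[a, b], [c, d]]
def det(m):
    # area scaling factor of the linear map m
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inverse(m):
    d = det(m)
    if d == 0:
        raise ValueError("singular matrix: it collapses area, so no inverse")
    # standard 2x2 inverse: swap diagonal, negate off-diagonal, divide by det
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

A = [[2.0, 1.0], [1.0, 3.0]]   # det(A) = 5: A scales areas by a factor of 5
Ainv = inverse(A)
print(det(A), det(Ainv))       # 5.0 and (up to float rounding) 0.2 = 1/5
```

Undoing a map that multiplies areas by 5 must divide areas by 5; that inverse-scaling relationship is the meaning the textbook proofs formalize.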

~~~
ivan_ah
I think you might like my book, the _No Bullshit Guide to Linear Algebra_. It
doesn't focus on the proofs, and instead gives lots of intuition and
applications. You can check the reviews on amazon, and here is a link to a PDF
with a few sample chapters:
[https://minireference.com/static/excerpts/noBSguide2LA_previ...](https://minireference.com/static/excerpts/noBSguide2LA_preview.pdf)

There are also some video tutorials on the first chapters here:
[https://github.com/minireference/noBSLAnotebooks#no-
bullshit...](https://github.com/minireference/noBSLAnotebooks#no-bullshit-
guide-to-linear-algebra-notebooks) (lots of hands-on examples using the
computer algebra system SymPy)

I won't lie to you and tell you linear algebra is "easy" by any means—there
are a lot of things to pick up, so it takes some time, but it is totally worth
it since LA is like the swiss-army knife of science: lots of features and
super useful.

------
newswasboring
I'm not going to lie, his life story made me jealous. Extremely jealous. He
did all the things I wanted to do (speaking in categories, not exact things)
and is free to do more. It seems like in some cultures (mine is South Asian)
there is a threshold on exploration time, usually around age 28-30 (for some
even lower than that; I consider myself one of the most fortunate ones). As I
approach that number I feel the invisible hand of expectations and
responsibilities crushing my spirit. But I must also remember that
comparisons on life scales don't really work, and nobody can win either the
happiness Olympics or the misery Olympics.

~~~
rsp1984
I'm not going to lie, his life story was written and presented with the
precise goal of making people feel this way. I stopped reading halfway
through the article because I found it way too self-promotional.

So don't feel bad about your life just because someone on the internet
pretends to have a more interesting one. Those people are usually just
attention seekers and for some reason need the outside validation to feel good
about their achievements. And remember that not needing that validation can be
a strength too!

~~~
newswasboring
I don't want to dismiss somebody else's life experience because it looks too
fancy. I have nothing against honest self promotion, it's someone's story and
I clicked on the article after reading the headline. I have no indication that
this person is pretending or faking it. Also, even without all the quirky
things, there are plenty of things to be jealous of: all the different things
he tried his hand at, the freedom to just up and leave a pretty successful
thing.

I realize that these articles suffer from the connect-the-dots thing, where
people make connections in the present which they would never have made in
the past. But that is beside my point: even if he had failed at all those
things, I am still jealous he had the chance to try them.

~~~
K0SM0S
May we go on a short tangent about a word and its associated feeling?

I submit to you that you don't sound _jealous_, because it really doesn't fit
the rest of what you express; I suggest that you are maybe "envious¹" in the
sense that jealousy means envy + depriving the other of what they have ("it
should be me and not them"; it's a matter of exclusivity, like being jealous
of #1 if you finished second, or jealous of the one dating someone you love,
or whoever took your job/offer). You don't sound hostile to them, merely
wishing more for yourself (which in itself is a positive feeling?)

I really don't know what the word you meant to use in your own language
actually meant (I'm French personally, so English is just our medium
translation layer). But I'm curious, culturally you know, about these
nuances².

I'd love it if you could just introspect that feeling a bit and share what it
really feels like, the complex emotion and what it "touches" in you (does it
bring despair, or motivation, or resolve, etc). [if it's too personal, I got
email, just ask]

Personally, I can feel "aspiring to" or "inspiration from" people who achieved
more than me — I don't want to deprive them of anything, I don't wish they
failed, nor do I wish to belittle their accomplishments; however I'd love to
eat everything they know, steal like the greatest of artists (the good ones
merely copy!), that is at best become _friends_ with these people and let them
_influence_ me (the true deeper meaning of that book³, if you ask me). And I
know, somehow deep down, that the more I'd be rooting _for_ these friends,
helping them go further, the more I'd be moving forward/up as well, taken in
by the positive storm.

[1]: _envy_ is _“a feeling of discontented or resentful longing aroused by
someone else's possessions, qualities, or luck.”_

[2]: Part of my (personal) research on human nature, motivations, "what makes
us do what we do".

[3]: How to win friends and influence people, by Dale Carnegie

~~~
newswasboring
Honestly speaking, as English is not my first language, I confused the two
words. I thought envious meant wanting to deprive others of what they have. I
guess I may have confused some people then; I am envious, but I think it
doesn't detract from my overall point. The word in my language would
literally translate as desire, but it's more than that; it would mean more
like: I would like to have that, and it feels bad that I don't even have the
opportunity to have it.

I would definitely say this lack of even the opportunity to do this makes me
feel despair. I can try as hard as I want, but some things are just out of my
control, some truths about my life were written even before I was born. I
honestly cannot muster up the strength to derive inspiration or motivation
from these, because those things are relevant for things which are possible.

~~~
K0SM0S
Rest assured that jealousy and envy are actually often confused by many
people, even in one's own language. ;-)

> The word in my language would literally translate as desire but it's more
> than that, it would mean more like I would like to have that and it feels
> bad that I don't even have the opportunity to have it.

Yeah, OK, I get it. That's a very good word (the one in your language). I
think it's a rather universal feeling, this "invisible ceiling". Many people
feel that for various reasons.

There's a certain school of thought, somewhere between philosophy and
spirituality, that speaks of "abundance", and beyond (or perhaps before, on
the way) of "inner peace" or "inner happiness". The oldest forms I know are
Stoicism (western cultures) and Zen (eastern cultures), and you'll find it
nowadays in e.g. Tony Robbins, that kind of field. I think there's truth in it
that just works, at least it did for me (took me about 35 years to figure it
out though, as it's just totally outside the realm of "education" nowadays¹,
unfortunately IMHO).

One mechanism that I've always found to be true is that _from the depth of
our biggest despair comes our symmetrical potential for joy_, and vice-versa.
It takes knowing how good/bad it gets to really feel how worse/better it
_is_, or rather _goes_.

It's certainly trying on one side, but invaluably rewarding on the other.

[1]: at least in most of the western world, afaik.

------
octokatt
Was anyone else really put off by the congratulatory tone of the article, and
the #Quirks list on the resume?: [https://github.com/emilwallner/Emil-Wallner-
LinkedIn-Resume#...](https://github.com/emilwallner/Emil-Wallner-LinkedIn-
Resume#quirks)

~~~
TrackerFF
You know, they (read: recruiters) say that if you don't have a normal
background, you should have an interesting one. For some reason, if you don't
have the same cookie-cutter background as everyone else, you need to have some
amazing and convincing story to tell.

I think it's good that companies are willing to look into non-trad candidates
who may not have found their "calling", so to speak, until their late 20's /
30's or whatever. But it does start to sound contrived when a bunch of 'em
have the same type of alternate-route story, which involves traveling to
Africa / India / SE Asia to help out kids, or creating some startup aimed at
climate / poverty / equality / etc. I guess it makes you sound passionate and
legit - no-one can say that you wasted your time chasing those things.

~~~
octokatt
I guess it can help you sound passionate, but taken together the list doesn't
sound interesting -- it sounds like an AI read a bunch of minimalism
lifestyle blogs and output "interesting_backstory.txt".

Nothing on the quirks list is actually a quirk. They're interesting things
he's done that other people wrote books about and received praise for, and
then he followed their newer, well-traveled path.

It's not a non-traditional background. He's not a refugee who managed to learn
coding. He's not volunteering at a needle exchange clinic. I think that's
what's bothering me; he's pretending to be interesting, and taking the room
which could be going to someone else.

Thank you for helping me get to why something felt off. Appreciated, internet
stranger.

------
qntty
"Many are realizing that education is a zero-sum credential game."

Can this silly meme die already? Maybe it's understandable coming from an
economist who values education for no reason other than its economic effects,
but it's strange coming from someone who clearly understands the value of
personal development.

~~~
codebolt
My prediction is that whoever comes up with the next forward leap in AI will
be someone who at minimum has a firm grasp on the various branches of
undergraduate level maths. Naively tinkering with heuristic statistical ML
methods like neural nets and hoping that higher level intelligence somehow
magically pops out isn't the way forward. We need a more sophisticated
approach.

~~~
tprice7
This is logically independent from any claim about the value of formal
education. I can say from experience that an undergraduate degree is not
necessary in order to gain a firm grasp of undergraduate-level math. Happy to
elaborate if that is desired.

~~~
codebolt
I'm sure it's possible to learn on your own, but I think most people would
benefit from taking a few years of their lives to dedicate to learning
surrounded by a community of teachers and like-minded classmates. Learning on
your own requires a lot of discipline and dealing with solitude.

~~~
YeGoblynQueenne
The OP is highlighting maths because deep learning in particular makes use of
some light calculus and linear algebra, and the OP is probably mixing together
AI, machine learning and Deep learning (as is common today, unfortunately, and
I can't blame the OP for that, everyone's doing it).

However, there is a lot more to AI than high-school maths, and I don't just
mean _more_ maths. I mean knowledge, lore if you like. It's a field with a
long history, stretching back even to the 1930's (before it was actually
named "AI" at Dartmouth in the 1950's). A lot of very capable people have
worked on AI for a very long time and have advanced their respective
sub-fields by leaps and bounds, and it's not very sensible to expect new
leaps while being completely clueless about what has been achieved before.
You can't stand on the shoulders of giants if you don't know that there are
giants and that they have shoulders you can stand on.

Unfortunately, most people who enter the field today know nothing of all that,
or even that there was an "all that" before 2012 (if they even know what
happened in 2012; and to be honest, one wouldn't understand what 2012 means if
one doesn't know what came before). So on the one hand they are not capable of
making leaps and on the other hand they don't even know what a leap would look
like. And probably think that a "leap" is a 10% improvement of the state of
the art for a standard classification benchmark.

I agree with you though that what is needed to make leaps in AI is curiosity.
Lots and lots of curiosity. Vast amounts of curiosity. Curiosity of the kind
that you only find in people who are a bit zbouked in the head, or just
people who have a lot of time on their hands, to study whatever their fancy
tells them to.

So: not the kind of person who flashcards The Deep Learning Book, if nothing
else because that means the person doesn't have the time to, you know,
actually read the damn book well enough to grok it.

I mean seriously, what the fuck is it with the bloody flashcards?

------
exdsq
Survivorship bias or reality:

3 months learning FastAI, 3-12 months personal projects and consulting, 2
months flashcards of ~100 papers, 6 months to publish a paper

What does he mean by ‘paper’? A Medium post? NeurIPS?

~~~
itsmefaz
Submitting papers in conferences rather than journals.

~~~
exdsq
Without submitting from a known university or research group? Seems unlikely
to me. I have no doubt it can and has been done, but for it to be a regular
thing such that one can recommend it in a 'Couch to 5k' style method? No way.

I have some friends in Oxford who are DPhil students/postdocs in highly
reputable research departments specializing in ML, and if they sometimes
struggle to get more than a poster session at the leading conferences, even
with well-known professors' names attached, then there's just no way I can
believe Joe Bloggs, who just learnt Python 12 months ago, is able to do the
same.

I almost want to follow his guide just to check.

~~~
itsmefaz
The idea is to choose conferences like InfoQ where application-type research,
like Build-X-using-TensorFlow, is accepted. It doesn't have to align with
standard research, which requires formal education.

One also needs to be skeptical while reading such a PR post and not get swept
away by the hero's journey in it.

~~~
exdsq
I was not aware of InfoQ - that looks pretty cool actually. But yes, this
makes a lot more sense. Thanks.

------
bluetwo
The thing that disappoints me about the aspirations of being a researcher is
that the goal is to get paid to study AI, not solve real-world problems.

I would rather build a small company by solving a real problem than work for a
big company spinning my wheels.

~~~
currymj
for a lot of people who end up in research-type jobs, a sense of curiosity is
one of their strongest motivators, and they want work that will let them
pursue their curiosity. it sounds like you're motivated by something else.

~~~
bluetwo
Well yes, I am very curious, just more motivated to solve problems.

------
ineedasername
I think in these sorts of discussions two concepts with the same name tend to
get conflated, so I think it's important to make a distinction between:

1) _AI Research_ as applying/tweaking known ML/DL methods to a novel problem.
I would term these something like "AI Engineering Research"

2) _AI Research_ as examining the theoretical frameworks & approaches to ML/DL
in a way that may itself lead to shifts in the understanding of ML/DL as a
whole and/or develop fundamentally new tools for the purpose of #1. What might
be termed "basic" or "pure" research.

I'm not placing one of these above the other in terms of importance. They are
both necessary, and they form a virtuous feedback loop; either one, without
the other, would wither on the vine.

In the example of this particular person, Emil Wallner, he appears to be doing
#1, and perhaps doing so in a way that might help inform more of #2.

~~~
JamesBarney
I'm having trouble differentiating 1 from 2. Some cases seem obvious:
discovering deep learning is #2; labeling some data and throwing it at an
algorithm after tuning a few hyperparameters sounds like #1.

But in my mind there is also a lot of overlap. Mind providing some concrete
examples? For instance what is discovering "transfer learning", "pre-training
with self-supervised learning", or "building PyTorch"?

~~~
ineedasername
>in my mind there is also a lot of overlap.

Yep! There can be. But if you want concrete examples, I used Xgboost to
identify people within a population at risk for an adverse event. This is
strictly #1. If I optimized Xgboost code to make it faster, that's also
probably firmly #1. If I improved Xgboost with a better understanding of
gradient boosting to provide more accurate results, that's probably a firm
case of overlap. When Leo Breiman [0] did his work that led to gradient
boosting and tools like Xgboost, that was firmly #2.

[0]
[https://en.wikipedia.org/wiki/Leo_Breiman](https://en.wikipedia.org/wiki/Leo_Breiman)
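For readers who haven't seen the mechanism behind gradient boosting, the #2-style insight is compact: fit each new weak learner to the residuals of the ensemble so far. The toy pure-Python version below uses decision stumps on a one-dimensional problem; it illustrates the principle only and is nothing like Xgboost's actual implementation:

```python
# Toy gradient boosting for squared error: every round fits a decision stump
# to the current residuals, then adds it (shrunken) to the ensemble.

def fit_stump(xs, rs):
    # Find the threshold split minimizing squared error, predicting the
    # mean residual on each side.
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, rs) if x <= t]
        right = [r for x, r in zip(xs, rs) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds=300, lr=0.3):
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]  # what's still unexplained
        s = fit_stump(xs, residuals)
        stumps.append(s)
        pred = [p + lr * s(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = list(range(10))
ys = [x * x for x in xs]      # nonlinear target the stumps must team up on
model = boost(xs, ys)
print(model(4))               # close to 4*4 = 16
```

In the terms of the comment above: calling `boost` on a dataset is firmly #1; the residual-fitting idea itself, and its interpretation as gradient descent in function space, is the #2 contribution.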

~~~
JamesBarney
Thanks!

------
rmah
Is this guy actually a _researcher_ in the way most people would think of it?
That is, someone who pushes the boundaries of science; who develops new AI
techniques or finds the hard boundaries of existing ones; who finds new ways
to compose multiple AI techniques cohesively; who explores the theoretical
foundations of AI.

Or is he someone who uses AI techniques to solve problems (and then wrote a
paper about it)? I can't help but wonder a bit.

~~~
ssivark
For better or worse, the definition of researcher has morphed into a
combination of

1. Solves previously unsolved problems

2. Publishes papers sharing those solutions

without regard to the kind/spirit/scope of the problems solved.

Since conference publications don’t have the same limits on paper numbers as
journals, and are accepting of application-specific results, this explosion
of what is considered “research” is somewhat inevitable. Also, a lot of
people are chasing this, given the prestige associated with the title.

------
wigl
This reeks of survivorship bias to me. I much prefer Andreas Madsen's more
sober and self-conscious take on independent research [0].

> I’d spend 1-2 months completing Fast.ai course V3, and spend another 4-5
> months completing personal projects or participating in machine learning
> competitions... After six months, I’d recommend doing an internship. Then
> you’ll be ready to take a job in industry or do consulting to self-fund your
> research.

Where are these internships that will hire you based on your completion of
Fast.ai (if done in 1-2 months by a beginner I assume it's only part 1) alone,
especially in 2020? How many are going to place in a Kaggle competition with
just half a year of experience? More importantly, just how many people are
privileged/secure enough to put their all into learning, with no sense of
security or peer support?

> I started working with Google because I reproduced an ML paper, wrote a blog
> post about it, and promoted it. Google’s brand department was looking for
> case studies of their products, TensorFlow in this case. They made a video
> about my project. Someone at Google saw the video, thought my skill set
> could be useful, and pinged me on Twitter.

So what really mattered was self-promotion, good timing, and luck.

> Tl;dr, I spent a few years planning and embarking on personal development
> adventures. They were loosely modeled after the Jungian hero’s journey with
> the influences of Buddhism and Stoicism.

Why does the author have to present his life like one would in a fucking
college essay?

[0] [https://medium.com/@andreas_madsen/becoming-an-
independent-r...](https://medium.com/@andreas_madsen/becoming-an-independent-
researcher-and-getting-published-in-iclr-with-spotlight-c93ef0b39b8b)

~~~
drongoking
> So what really mattered was self-promotion, good timing, and luck.

Yes. He seems like someone who is good at self-promotion and networking. Well,
good for him, but I think he underplays the role these have in his success.

> Why does the author have to present his life like one would in a fucking
> college essay?

I guess that's the self-promotion. And humble-bragging. Like this bit:

"I started working as a teacher in the countryside, but after invoking the
spirit of their dead chief, they later anointed me the king of their
village."

~~~
wigl
> Well, good for him, but I think he underplays the role these have in his
> success.

Exactly. Good for Emil, but it's always frustrating to hear survivorship bias
preaching. Even the interviewer starts off by saying:

"By the way, I really love your CV - the quirks section was especially fun to
read."

It's even more frustrating when I hear non-POC's talk about their journey to
some non-western country (and their subsequent conquering of fantastical
goals like gaining the approval of locals) or their pursuit of some sense of
foreign culture. It's almost a given that they have internalized and
appropriated the ideas (i.e. Buddhism, or even worse, post-retreat Buddhism).
Good for the author to receive such positive feedback for such signaling, but
it makes me sad to know that I might not receive the same.

~~~
barry-cotter
> It's even more frustrating when I hear non-POC's

If you want to talk about white people, say white people.

------
vector_spaces
> Creating value with your knowledge is evidence of learning. I see learning
> as a by-product of trying to achieve an intrinsic goal, rather than an
> isolated activity to become educated.

> Early evidence of practical knowledge often comes from usage metrics on
> GitHub, or reader metrics from your work blog. Progress in theoretical work
> starts by having researchers you consider interesting engage with your work.

> Taste has more to do about character development than knowledge. You need
> taste to form an independent opinion of a field, having the courage to
> pursue unconventional areas and to not get caught up in self-admiration.

When I study abstract interpretation or lattices, I'm doing so because I find
those subjects interesting and beautiful, and studying math relaxes me. I can
lie to myself and say that it's improving my problem solving ability and that
it's like doing mental yoga and will make me better at my job or some baloney,
but that's not why I do it.

I can spend time with a plant in my garden, take a cutting, root it and
replant it, and watch it grow, learn the ebbs and flows of its watering needs
through the seasons, learn what its seed pods look like, and eventually watch
it die through some misstep of my own or otherwise.

And in doing so, I am learning, and building a mental model for this plant and
an intuition for it, but I'm not "creating value" in some weird capitalist
sense, which I feel always underlies these sorts of opinions about learning
and education, and people who self-identify as "makers" in general. It rubs me
the wrong way because it encourages a very narrow view of the human experience
and what it means to learn and why we should learn.

~~~
itsmefaz
An underrated comment!

------
anentropic
I think most of us here on this site could do the same in terms of learning
and research

Seems to me the difficult part is how to support yourself financially while
spending your time doing interesting learning and research, or how to get paid
to do it

Maybe the most important detail in the story is "He co-founded a seed
investment firm that focuses on education technology" but it is not discussed
further

------
LemonAndroid
I don't see how this is self-taught, as the person got picked up for an
internship and could learn from experts firsthand.

FAKE.

~~~
curiousgal
He also studied at 42, which is most likely why he got picked for the
internship to begin with. I don't get this self-congratulating BS, guy says he
toured the world (good luck doing that with a shitty passport) and was named
king of a village in Ghana (right..). I guess people who get lucky have to
always go to great lengths to justify and spin that.

They're free to do so of course, but they should not give advice based on it.

Regardless of that, I suppose the bar for being a "researcher" has been set
remarkably low. According to this guy, publishing an ML paper is equivalent to
writing a blog post or making a video about "AI".

------
NWM123
I personally found this article to be very interesting. I don't know much
about AI, but I was fascinated by the discussion of peer-to-peer educational
systems. I believe they will become more prevalent as student loans saddle so
much of our population with debt in the pursuit of an education.

------
jshowa3
I don't know why people think getting a credential does nothing or that people
"copy and paste" the assignments. Sure it may be possible, but what prevents
people from copying and pasting public git repos?

Either way, this whole focus on "portfolios are everything and credentials are
meaningless" spits in the face of all the work I did to get my university
education. And it didn't involve "copying assignments". And you come out with
one hell of a portfolio if you take your education seriously.

I mean I don't think self-educated people are without merit. I happen to think
they're really important. But I only ever see them rag on higher education,
despite them having "never been there".

Just another example of a wunderkind super genius who knows all because he was
able to follow a non-standard path and make it. Glad he was smart enough to become
a Google employee. But I question whether he should be giving advice on paths
to get there when there's always many paths to a position. And especially
after reading his brief comments on how credentials imply you're a liar.

~~~
JDiculous
> I don't know why people think getting a credential does nothing

Then actually pay attention to the arguments they're making instead of talking
about how offended you are because it goes against your self-interest as a
degree holder. It's not as if the people bashing modern education are some
kind of elusive minority.

I've got a master's degree and I've always thought our education system is
stupid, and at least in the U.S. not unlike a giant pyramid scheme given the
cost of tuition these days. Absolutely nothing you learn in a college
education you can't learn yourself for free on the internet.

~~~
mehrdadn
> Absolutely nothing you learn in a college education you can't learn yourself
> for free on the internet.

I don't know who "you" is (perhaps you in particular are very gifted) or what
you personally learned in college, but on my end in college I specifically
took particular classes to learn topics that I had previously tried and failed
to learn on my own, so, if it's meant to be generic, I'm pretty confident your
claim is false.

~~~
throwawayjava
_> ...or what you personally learned in college_

I went to a bad No-Name University for undergrad and then a Top Tier
University for my PhD, so I have an unusually representative view here.

I do agree that, for CS, the no-name university was... bad. Fortunately, I
realized this early and did a lot of self-study. I probably learned more
reading TAOCP and going through MIT OpenCourseWare in the library during the
evenings than I did in my actual undergraduate courses.

The mathematics courses, even at No Name, definitely provided me with a better
education than I could have ever gotten on my own. I'm pretty bad at math, it
was my worst subject in high school. So I double-majored in it during
undergraduate. This dovetails with your point about using university to learn
things you already tried and failed to learn on your own, or which you
otherwise know will be difficult to learn on your own.

The CS education that undergraduates get at Top Tier University is far better
than what I got, even though I worked through that Top Tier University's
online courseware/lecture notes/exercises during undergraduate on my own.

My hot take: college is always worth it, but only if you intentionally invest
in "leveling-up" past your previous potential.

That will happen almost by default at Top Tier unless you're a genius (...but
you'll pay a lot for it). But not if you're going to university at No Name.
So, in that case, students should definitely a) minor or even double-major in
something they're not good at, and b) heavily supplement their CS courses
during evenings/weekends.

Also, this is all highly specific to very self-motivated learners -- the sort
for whom "self-taught" is a reasonable route. I'm one of those people. Over
time, I've realized that we're a very small minority. Our perceptions of what
others are capable of learning on their own, and prescriptions for how others
should learn, are typically quite warped. Most people probably do need
something like a 4 year college degree to become a competent programmer.

------
ptah
> deep learning internship at FloydHub.

nice to be able to work for free and not starve

------
DoctorOetker
This is a great example of how we "collectively" [1] conflate distinct
phenomena, skillsets, and activities into one topic: machine learning / AI.

1) There is the general phenomenon, or collective project, in which hardware,
algorithms, and human insight are improved to approach the goal of man-made
intelligent machines.

2) There are the people who are designing algorithms, using mathematical
intuition and knowledge, analogies with physics, etc... Most people would
agree these people are doing optimization / machine learning "proper".

3) There are the people working on improving hardware for machine learning /
optimization purposes by looking at the most performant algorithms and
breaking them down into primitive operations and hardware requirements. There
are also people working on the algorithms themselves, finding computational
shortcuts (which can end up in software or hardware, and as proprietary
knowledge or common knowledge, ...). The distinction between hardware and
software is somewhat blurry, since hardware designers can implement or
optimize a section of software in hardware. A lot of this can still be
considered ML "proper".

4) Then there are the people who apply the ML frameworks, with their exposed
choices and settings, to a specific problem domain. Many of them don't need to
understand the internals if they don't need state-of-the-art results, though
many would nevertheless benefit from understanding the internals and the
requisite math. What I propose is to stop calling their activity Machine
Learning and instead call it Machine Teaching. They are teachers: just like
elite schools, they can choose which type of available student they will
teach, and they can tweak (or filter from a large family of students) the
student they select for the task at hand. There are bound to be many
advantages to having actual human teachers get involved in machine teaching.

These people will not be proficient in designing novel families of students
unless they also know the requisite math and can identify which ML papers are
ML "proper" rather than ML "teacher". When trying to find important
foundational insights in ML "proper", one is typically overwhelmed by a large
surplus of ML "teacher" papers. These are important datapoints, and necessary
to advance human insight into ML "proper", but they are data, not knowledge.
There are actual ML "proper" knowledge papers out there that explain why a
certain phenomenon behaves the way it does, and they get very little
attention, because they necessarily lag the breakthrough ML datapoint paper
and most ML "teachers" don't have the math background to understand them.

So the probability that a given ML "proper" researcher _fundamentally_
improves the state of the art is much higher than the probability that a given
ML "teacher" will. At the same time, the probability that a given
_fundamental_ breakthrough was achieved by an ML "teacher" is higher than the
probability that it was achieved by an ML "proper" researcher:

P(Breakthrough | Proper) > P(Breakthrough | Teacher)

while

P(Teacher | Breakthrough) > P(Proper | Breakthrough)

Since most people don't have the broad math / physics / ... knowledge to draw
on, the number of ML "teachers" is much higher than ML "proper" researchers.
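The base-rate effect behind both inequalities can be sketched numerically. All
numbers below are made up purely for illustration; only the shape of the
argument matters: a small group with a higher per-person breakthrough rate can
still account for fewer total breakthroughs than a much larger group.

```python
# Illustrative (made-up) head counts and per-person breakthrough rates
n_proper = 1_000          # ML "proper" researchers
n_teacher = 50_000        # ML "teachers"
p_bt_given_proper = 0.02  # P(Breakthrough | Proper)
p_bt_given_teacher = 0.001  # P(Breakthrough | Teacher)

# Expected number of breakthroughs contributed by each group
bt_proper = n_proper * p_bt_given_proper    # 20
bt_teacher = n_teacher * p_bt_given_teacher  # 50

# Bayes / simple counting: who is a randomly chosen breakthrough from?
p_teacher_given_bt = bt_teacher / (bt_teacher + bt_proper)
p_proper_given_bt = bt_proper / (bt_teacher + bt_proper)

# Per-person, "proper" researchers are more likely to break through...
print(p_bt_given_proper > p_bt_given_teacher)    # True
# ...yet a given breakthrough is more likely to come from a "teacher".
print(p_teacher_given_bt > p_proper_given_bt)    # True
```

With these assumed numbers, P(Teacher | Breakthrough) ≈ 0.71, so both
inequalities in the comment hold simultaneously once the groups' sizes differ
enough.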

[1] well, really, some actors have vested interests in conflating those
together...

EDIT: just to be clear, I am not complaining about ML Teachers, we need the ML
Teachers, and their breakthrough datapoints. What I am complaining about, is
conflating both activities of ML Proper and ML Teaching. This makes it harder
for the few ML Proper researchers to find each other's insights.

