
Lessons from My First Two Years of AI Research - tomssilver
http://web.mit.edu/tslvr/www/lessons_two_years.html
======
chrstphrhrt
I really enjoyed this writeup. Having just finished a year in the field on the
engineering side, and having read many papers without an academic background,
it’s funny to recognize who the “math” types and the “biology” types are at
work. Puts things in perspective.

I’ve been pushing for more ensembles and multi-label classifiers because I
want to orchestrate the pieces with logic to fill gaps until statistical
methods outperform them (given enough new data in nice annotation-friendly
structures).

The “math” types seem to feel there’s a neural net solution to every problem,
or that we can expand the multi-class model to cover more domains despite it
already being mostly saturated at high accuracy. Sounds awesome, but I’m
impatient!

The “biology” folks seem to be most attracted to neatness or parsimony. We had
some great bikeshedding sessions around adjacency list vs materialized path
(ltree) in postgres for label hierarchies. Abstractions can be useful too!
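For anyone who hasn't hit that debate: a toy sketch (plain Python standing in for the Postgres tables; names are illustrative) of the same label hierarchy stored both ways:

```python
# Adjacency list: each node stores only its parent; getting the full
# path means walking parent pointers, one lookup per level.
adjacency = {"animal": None, "mammal": "animal", "dog": "mammal"}

def path_via_parents(node):
    parts = []
    while node is not None:
        parts.append(node)
        node = adjacency[node]
    return ".".join(reversed(parts))

# Materialized path (what ltree stores): the full path per node, so
# subtree queries become simple prefix matches (ltree's <@ operator).
paths = {"animal": "animal",
         "mammal": "animal.mammal",
         "dog": "animal.mammal.dog"}

def descendants(prefix):
    return [n for n, p in paths.items() if p.startswith(prefix + ".")]

print(path_via_parents("dog"))  # animal.mammal.dog
print(descendants("animal"))    # mammal and dog
```

The trade-off the bikeshedding usually circles: adjacency lists make moves cheap (update one parent pointer), materialized paths make subtree reads cheap (one indexed prefix scan).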

Any tips on being a better experimentalist and pushing academic colleagues
towards better solutions in the field?

------
alphydan
> I once had an occasion to ask a very prominent AI researcher for early
> career tips. His advice was simple: write!

I give the same advice to new PhD students. I wrote mini-overviews of what I
was researching, doing, and thinking for the first two years. Write them in
LaTeX too. By the time you have to write your actual thesis you can merge
papers, overviews, and ideas, and be done in a few months (instead of the
years it takes people who didn't write along the way).

~~~
raybb
Were you using any specific app for writing?

~~~
alphydan
In 2008, just emacs + pdflatex. Some of the bibliography pain is eased by
apps now, but I didn't use any back then.

------
yughurt
Wow Tom! Congrats on getting into the program. It was funny seeing your name
on hacker news. Cheers, Shane.

------
graycat
The first and maybe the most important step in research is problem selection.
Pick a good problem.

~~~
Ar-Curunir
There's a lot to unpack in that statement. A good problem is good for many
different reasons: it's not enough that the problem be an important one; it
should also be the case that you have the techniques and knowledge to solve
it, otherwise you'll be banging your head against the wall without success.

As a student, you might know that a problem is important, but you might not
yet have the tools to solve it; you might not even be aware of which tools
_can_ solve it.

~~~
graycat
Uh, when picking a "good problem", part of what makes a problem "good" for the
researcher is that they have a shot at solving it!

------
augbog
> One of the most common and aggravating manifestations of hype in AI research
> is the renaming of old ideas with flashy new terms. Beware of these
> buzzwords -- judge a paper based primarily on its experiments and results.

So true

