
The differences between tinkering and research (2016) - hardmaru
http://togelius.blogspot.com/2016/04/the-differences-between-tinkering-and.html
======
cornelis
Thank you OP for your very insightful blog post. You have been better than any
other academic at articulating exactly what happens when I (or one of my
colleagues) interact with the academic world and you made me realize that the
academic world has a set of values that might be good but which I should not
even attempt to incorporate in my own professional life.

I have come to my personal conclusion that there is a difference between
'science' and 'the academic world' in much the same way as there is a
difference between 'religion' and 'the church'. One can be very religious
without going to the church and doing all the rites that are required by the
church. In much the same way, a 'tinkerer' (or 'heathen') can be scientific
without being academic. I believe that wiki summarizes it quite well for me:
"Science is a systematic enterprise that builds and organizes knowledge in the
form of testable explanations and predictions about the universe". There is no
statement about giving proper credit (vanity) or not being allowed to reinvent
the wheel without first making damn sure that you are indeed reinventing.

I want to solve real-world problems, and the way I want to do that is with a
structured and repeatable approach. Science helps me do this. Of course I'm
not living in a vacuum, so I will look up as much as I can about the subject
at hand as is reasonable within the available amount of time. Interestingly
enough, I almost never end up with a research paper, but almost always with
blog posts, books, tutorials et cetera. Products don't sell themselves and
neither do research papers apparently. I think there is something seriously
wrong if that is not perceived as a crisis in the academic world.

~~~
togelius
"a systematic enterprise that builds and organizes knowledge" That is exactly
my point. Doing your scholarship is doing your part to organize knowledge.
Systematically building knowledge means building on others, and knowing what
you build on.

So the quote (which you don't attribute to anyone other than "wiki" - a
citation here would be useful) really proves my point. If you're not doing
your scholarship, you are not doing science or research; you are tinkering.

Again, there's nothing wrong with tinkering.

~~~
oldandtired
Interestingly enough, when I have found research papers on the subject I am
looking at, most were so opaque that they were not worth reading in the first
place. More helpful information has been presented by others whom you would
classify as tinkerers. They have been clearer in their explanations and much
easier to build upon.

In the sense of presenting the research in a manner that other people can
digest, academia seems to be more of an ancient guild than an organisation for
expanding knowledge.

~~~
randomsearch
Academic papers are written for an audience of experts. If you wish to
understand them, you’ll need to be an expert in the field (you can sometimes
get away with a bit less, depending on the topic). They are not intended for
general dissemination.

~~~
kaybe
Yes, if you cannot follow the papers you need to read up some more. (Look for
review papers if you're new to the area.)

~~~
traverseda
Sure, where do I get review papers without having an affiliation with an
academic institution?

~~~
rmcpherson
Sci-Hub has made virtually all academic papers freely accessible.

------
isthatart
I'm an academic researcher. I disagree with most of the article. The main
distinction I see between "research" and "tinkering" is not what the author
calls "scholarship". From personal experience the only serious distinction is
that "researchers" tend to have a very long time attention span compared with
"tinkerers".

As for the rest, the goal of academic research, lately, seems to be to produce
as many units as possible for the benefit of the publishing industry. In the
case of very successful articles, you can fit all the readers in a (small)
bus. Most of the articles are badly written, because they follow old habits
from the age when printing was expensive. Citations are done in a very
selective way. Not so long ago there was no need to give the source of an
"unpublished" article; "preprint" was enough. Tinkerers' work is almost never
cited, even if it is read and used. Blogs? are you kidding me? that's not
"scholarship". Heck, even arXiv still seems exotic in some fields, despite the
fact that it is a good, standard communication medium for researchers in
other fields. Finally, peer review. How does it work? Researcher B reads what
researcher A reports and expresses an opinion, just like any honest reader
could. This passes for validation. Magic.

In conclusion I believe the future of academic research comes from "tinkering"
with a long time attention span. The dissemination is technically trivial.
Peer review will be supplemented by more rigorous validation (although there
is no absolute solution here). The same kind of validation will be applied to
tinkerers' results.

~~~
throwawayjava
_> The main distinction I see between "research" and "tinkering" is not what
the author calls "scholarship". From personal experience the only serious
distinction is that "researchers" tend to have a very long time attention span
compared with "tinkerers"._

Many aspects of "scholarship" become inevitable when "tinkering over a long
timeframe". You build up a knowledge base about the problem and/or methodology
you're tinkering with, you eventually form a community around your tinkering,
etc.

Lots and lots of tinkerers gratify themselves without contributing even a
modicum of knowledge to the world. But the most successful tinkerers are
almost always effectively scholars in the sense of this post: they're
familiar with a body of knowledge and with the people producing that
knowledge, and they contribute genuinely new ways of doing things, or new
ideas of things to do, or excellent executions of existing ideas that achieve
the aspirations of previously articulated ideas. E.g., Linus is certainly a
successful OS scholar in the sense of this post.

 _> In the case of very successful articles, you can fit all the readers in a
(small) bus_

This is off by at least an order of magnitude for any even remotely reasonable
definition of "successful".

 _> Researcher B reads what researcher A reports and expresses an opinion,
just like any honest reader could_

First, the author explicitly kinda-sorta agrees with you here ( _...Third,
it's not really the publication..._), so it's pretty clear that publication
venue != scholarship.

As an aside, peer review != validation!!! This is something that every young
Ph.D. should explicitly learn early on.

~~~
isthatart
Yes, I hope that evolution will produce something new from the available
academic and tinkerer genes. The academic lives longer in a stable medium, the
tinkerer dies young but multiplies faster in chaotic conditions.

------
jacobolus
There are also some unfortunate features of academic research. A big one is
that the product the researcher cares about is papers and citations, not
working software per se. Papers you only write once. You can cherry-pick
examples, work around bugs, do some facile analysis, publish the paper and
then move on never to look at the code again.

This means that code and data are typically hacky, unpolished, poorly
documented, unmaintained, and unsuitable for public use (or often not publicly
inspectable at all). Often the folks doing the grunt work are graduate
students, who leave after a few years, leaving a new batch of graduate
students to deal with the code without much guidance, so it tends to accrete
features but seldom get any refactoring love. Nobody is paying for bug fixes
or software maintenance or documentation.

Someone coming from industry (or even just a tinkerer) who wants to use the
same idea in a production environment often realizes halfway through
reimplementation that it isn’t actually better than existing alternatives, and
that the analysis/benchmarks in the paper were limited or misleading for a
general context. Sorting the wheat from the chaff can be pretty demanding,
ideas can sometimes be obfuscated by abstruse mathematical formalism, and
academic researchers don’t typically go out of their way to make their output
usable for non-academics.

~~~
dpwm
A few years ago I found myself reading pretty much every paper on methods that
could give a better-looking solution to "deblurring" an image than the more
established methods. This is a field which overlaps with many different areas
of research.

You've got traditional iterative methods through to Bayesian methods and
machine learning methods. Each of these approaches seemed relatively isolated
from the others and largely cited within its own subfield, despite all solving
variants of the same problem. When they did cite outside the field, they
tended to cite less relevant or far-from-state-of-the-art papers.
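
The "traditional iterative methods" referred to here include, for example, Richardson-Lucy deconvolution. As a rough illustration (mine, not from the thread), here is a minimal 1-D sketch of that idea, assuming a known point-spread function and noiseless data:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100):
    """Minimal 1-D Richardson-Lucy deconvolution sketch.

    Iteratively refines an estimate of the unblurred signal so that
    blurring it with the point-spread function reproduces `observed`.
    """
    psf = np.asarray(psf, dtype=float)
    psf_mirror = psf[::-1]                     # adjoint of the blur operator
    estimate = np.full_like(observed, 0.5, dtype=float)
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)   # guard against division by zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```

Real papers in this space differ mainly in how they regularize this loop against noise, which is exactly where the hard-to-compare tuning parameters come in.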

What was striking was the number of papers with a plot showing how amazing the
new deconvolution method was in the presence of noise compared to other
methods, often using a vaguely defined metric. This plot would take up about
half a page, usually following a few example images that were set very, very
small, often ~2cm and usually in a grid. That was always deeply suspicious.

The other thing I remember was models with tons of unexplained parameters,
which were carefully tuned to give good results for that one image. But really
it was guesswork. Sometimes the authors would carefully avoid well-cited work
in another field that they could easily have found.

As a tinkerer with a scarcity of time, you learn where not to waste your time
and move on.

~~~
joe_the_user
Having tried to find image-processing research/scholarship around 2009, before
the neural-net explosion really set in, I have to say the field was a complete
rag-bag of fragile, vague methods. And efforts to extend these didn't help
them.

Which is to say that not all incremental, scholarly research is necessarily
what's needed. In fact, I think what the OP describes is effectively what
Thomas Kuhn calls "normal science" [1], which can be great until it becomes
terrible - i.e., "normal science" periodically reaches crises that require a
different approach (inspiration, insight, iconoclasm, whatever).

But that shouldn't detract from the point that anyone should know what
traditional research looks like: if someone is going to engage in something
other than normal science, they would benefit from being really aware of why
and how they are doing it.

[1]
[https://en.wikipedia.org/wiki/Normal_science](https://en.wikipedia.org/wiki/Normal_science)

~~~
bigiain
"I have to say the field was a complete rag-bag of fragile, vague methods. And
efforts to extend these didn't help them."

You say that like it's not still state-of-the-art in machine learning and AI
today. (And piloting self driving vehicles around public roads...)

------
joe_the_user
A really excellent summary of the distinctions.

One thing I'd note is that before the Internet era, the scholarship needed to
discover who had done X previously took a really significant amount of time.
The Internet has changed that a lot, at the same time that it has tempted
people into tinkering and overall wingnuttery. I remember, in the 80s, having
a professor who functioned as little more than a search engine: tell him an
idea and, after a couple of puffs of his pipe, he'd give you a list of people
who'd worked on it. He was marvelous, despite apparently never having done
much more than provide this kind of information.

Another thing to note is that scholarship implies a "community of
researchers". If you are working on problem X and people have looked at this
problem before, you should think doubly or triply before you claim your step
forward is necessary, because people as smart as you already tried to think of
everything. The article kind of says this, but also consider that the people
who have worked on this stuff exist; they have publications, conferences,
meetups, etc., and you can go to these or look at these and find out whether
or not your idea was considered and rejected, think about what's different
between your idea and the idea the "field" rejected, and so forth.

~~~
jamesrcole
> _If you are working on problem X and people have looked at this problem
> before, you should think doubly or triply before you claim your step forward
> is necessary, because people as smart as you already tried to think of
> everything._

At the same time, such reasoning is used to dismiss or discourage new
attempts, often unfairly:

- each of the earlier people was approaching the problem from a specific
angle, and new people might be approaching it from an untried angle

- the earlier people might have had limited time to really investigate the
matter, so there may be unturned stones

- advances since the earlier work may give newer researchers a leg up.

Note that all of these apply even if (as is likely) the earlier researchers
were very smart.

~~~
joe_the_user
>>If you are working on problem X and people have looked at this problem
before, you should think doubly or triply before you claim your step forward
is necessary, because people as smart as you already tried to think of
everything.

> At the same time, such reasoning is used to dismiss or discourage new
> attempts, often unfairly

Well, if you use such an approach of considering previous work, you will have
the tools to show you why your work shouldn't be dismissed (whether you are
believed or not is a different matter). If you aren't looking at earlier work,
not only do people have good ammunition for dismissing your position but it
really is hard to be sure you haven't just repeated something.

~~~
jamesrcole
> _if you use such an approach of considering previous work, you will have the
> tools to show why your work shouldn't be dismissed (whether you are
> believed or not is a different matter)_

That doesn't mean you'll be listened to. You might, but that's no guarantee.

There's also a situation you're overlooking: people being discouraged for the
cited reason _before_ they even start. "What's the point, if people as smart
as you -- and possibly smarter, more senior, and with more of a reputation --
already tried to think of everything?". This can be something that the
researcher can tell themselves.

I was explaining why there can still be room for progress, even if lots of
smart people have already explored the problem.

> _If you aren 't looking at earlier work_

At no point was I suggesting people shouldn't look at earlier work.

~~~
joe_the_user
Well, as Thomas Kuhn said, science tends to proceed incrementally under normal
circumstances. When science reaches a crisis, there is a tendency to look
towards solutions outside the normal paradigms.

Conveniently, computer programming and invention offer other ways for what the
OP called "tinkerers" to demonstrate the effectiveness of their ideas. Science
is imperfect, and in particular it isn't about the latest, most exciting ideas
but about making ideas reliable; the problem with being willing to quickly
accept even plausible-sounding ideas is that they can also include a lot of
dreck. But hey, there are many alternatives to scientific acceptance.

~~~
jamesrcole
I'm familiar with Kuhn. I'm not saying we should be quick to accept anything,
nor am I making any point related to science vs tinkering. All I'm saying is
that there are two sides to the statement I was originally responding to, and
that they should both be considered.

------
brownbat
> First of all, it's not about whether you work at a university and have a
> PhD. People can do excellent research without a PhD, and certainly not
> everything that a PhD-holder does deserves to called research.

I appreciate that this is dismissed as a straw man, but there are definitely
online communities (and individuals I know) that reflexively discount comments
from people outside academia.

I understand the instinct; it's a reasonable immune response when everyone
thinks they're an expert because they read a blog post or a pop-sci
bestseller.

Sometimes, though, the antibodies kill perfectly healthy tissue too, like
valid contributions from bright generalists.

~~~
the_cat_kittles
Another factor is that you are likely to get defensive if you spent a bunch of
time getting a PhD and someone who didn't is still able to do research as well
as you. I'm sure it's not very common, since getting good at something
requires lots of time and energy, but I bet it does happen sometimes, and
that's probably one of the reasons why some of the healthy tissue is killed.

~~~
DataWorker
And they have families to feed, and their income depends on their PhD being
enough of a barrier to entry that they can pay their bills. So much human
progress has been lost because the economic model is broken. We seemed to make
more progress when science was restricted to a smaller group of practitioners,
many of whom were wealthy enough that they could be guided by the science
rather than the funding.

~~~
newsbinator
Also in the early days there was more low-hanging fruit. You could make
scientific discoveries by locking yourself in a room for a week and thinking
deeply about something you read recently. Then come out and test it with items
from your kitchen.

Now it's harder to make breakthroughs without specialized equipment, and the
area of interest has narrowed to a tiny sliver.

------
Radim
Funny thing: do your eyes light up at a "scholarly academic paper", or at a
"deeply tinkering blog"? Which one have we trained our brains to react more
strongly to? Which is more exciting to you? Which do you feel you learn more
from, and have more confidence choosing as a starting point for further
activities?

OP's distinction between tinkering and research seems like wishful thinking at
best.

There are many grievances people have with research (and it's easy to think of
examples), but I find most tend to fall under:

* (unaddressed) RISK: poor result analysis, cherry-picked examples; glossing over edge cases and failure modes; focus on novelty and "academic writing" at the cost of repeatability and conceptual clarity. The Mummy Effect of _research that looks great but crumbles to dust when you actually try to touch it_ : [https://rare-technologies.com/mummy-effect-bridging-gap-betw...](https://rare-technologies.com/mummy-effect-bridging-gap-between-academia-industry/)

* (missing) OWNERSHIP: cadence dictated by grant cycles rather than one's intrinsic motivation to solve a problem; slow or missing feedback cycles; no dog-fooding => poor code and documentation; few incentives to make it easier for others to pick up or validate the results

In fact, both of these seem better addressed by tinkerers than by
institutionalized research. No wonder people react more positively to
"tinkerers" and "hackers"! Academia has a lot of positive inertia (and taxes)
going for it, but all goodwill has its limits.

------
ggm
As a self-defined tinkerer, I have tried the transition to being a
peer-review-recognised researcher. It is very hard. Science is a very harsh
mistress, and what you want your work to be saying is quite distinct from what
strong tests show it actually says.

The game-like qualities of surviving peer-review are just that: a set of
pretty arbitrary hurdles to cross. If you work in a narrow enough field, your
set of blind review peers is tiny and their critique, even if masked, can seem
very personal at times. I don't like that much.

I don't overall judge my attempt a success. I think this is an endeavour best
started early. Late stage career, trying to demonstrate you understand the
rules of science, is hard.

~~~
GuiA
The best way to get your work accepted in a peer-reviewed venue (journal,
conference, etc.) is to voraciously consume everything that venue has
published before, make your work match the style observed, and copiously cite
works from that venue.

Obviously the intrinsic quality of your work matters. But you’ll more often
see work of marginal value that properly follows the form than work of great
value that does not follow the form. In fact, in my field (academic HCI), I
have never seen the latter, yet see plenty of the former at every conference I
attend.

~~~
f1notformula1
Replying here because this matched my train of thought well.

I'm actually really curious to know if anyone has first-hand knowledge of a
tinkerer (by this blog's definition) getting their work published.

As the GP said, understanding the rules is one thing, but demonstrating that
understanding is another. And as you said, matching the style seems to be more
important than the value of the work that is demonstrated.

I see the value in matching terms, jargon, style, etc., just so the reviewers
can standardize their thought processes. But as someone who's been a tinkerer
for ages, it's hard to change styles for no immediate benefit.

Maybe some examples will inspire me to try :)

~~~
ggm
So my first attempt was 'this is what we do': a descriptive paper, documenting
a technique. The reviews that came back were basically a mixture of 'so what',
'what does it show', and 'show something of significance'. I had tried to
write it as the ur-paper, to establish a technique, and only described its
applicability in general terms. They wanted a lot more demonstrable outcomes.
I walked away. I found peer review very upsetting. It felt like nobody
actually cared about what I was trying to say.

My second attempt was 'this is a specific thing it can do', combined with a
much more rigorous academic 'this is the analytical technique' and 'this is a
polemic about lack of statistical rigour in results; here is our data, you
repeat it'. Which, interestingly, got panned as 'too much tutorial, too much
argument, more results'. This time, of course, we addressed the criticism
instead of walking away. The result? We didn't get into the first
journal/conference, but we made the second. I'm reasonably content, but this
feels a lot more like 'learn the rules of the game' than 'say something of
merit you find personally interesting'.

Oh, and 'this technique is interesting' doesn't seem to cut it as a paper
subject.

~~~
throwaway287391
Sorry if this comes across as harsh but I'm really not understanding your
grievance.

> They wanted a lot more demonstrable outcomes. I walked away. I found peer
> review very upsetting. It felt like nobody actually cared about what I was
> trying to say.

> Oh, and 'this technique is interesting' doesn't seem to cut it as a paper
> subject.

Well, yeah? To invoke an HN cliche, "ideas are cheap". Why _should_ anyone
else care about your idea if you can't be bothered to show it actually does
something interesting on some specific problem(s) or even motivate why it
might be expected to do something interesting in light of what's already out
there? Without any expectation of experimental validation, conferences would
basically be giant circle-jerks filled with completely inconsequential
"interesting ideas".

And it sounds like you took the feedback from your first round of peer review,
revised your work in light of those critiques, and got your resubmission
accepted. That seems like a pretty good experience to me, knowing many
academics with multiple experiences of resubmitting work 3+ times (with new
results and revisions each iteration) before acceptance. I'm not saying that
any peer review process is perfect by any means, but it's a very important
filter, and in this case it honestly sounds like the criticism you got when
your paper was rejected was pretty fair...

~~~
ggm
I have no grievance. I gave an experience summary. I think my expectations did
not match reality and I was reset.

------
ChuckMcM
I like this differentiation. Early on in the 3D printer craze I would often
say "a lot of people are building 3D printers but only a few people are
engineering them", which was much the same argument. When you "engineer" a
thing, you do so understanding the constraints and design it to work reliably
within those constraints. When you just build something, as soon as it works
once, you're done.

~~~
Animats
Engineering has a specific meaning in that area. It means you have a design
which can be built using standard techniques and will then work.

~~~
ChuckMcM
Agreed, as long as you also append 'to specifications under all acceptable
conditions.'

It was the latter bit that really challenged 3D printer makers. So many 3D
printers would work one time and not the next because there were so many
parameters that they didn't specifically account for or respond to.

~~~
Animats
Here's one that's very robust.[1]

[1]
[https://www.youtube.com/watch?v=srVHzKsBguM](https://www.youtube.com/watch?v=srVHzKsBguM)

------
1auralynn
As someone who spends a fair amount of time with software developed by
researchers (mostly biochemistry visualization software), I find that for all
the "research into what has been done", there is a huge amount of effort being
duplicated all over the place. In my experience this is due to the relatively
short timelines of grants, the fact that you have to spend most of your grant
money demonstrating that what you created has some kind of scientific impact
rather than actually implementing the thing, and, as others have pointed out,
the fact that a lot of the core work is done by diamond-in-the-rough superstar
grad students who will move on after 2-4 years.

------
Jedi72
I wonder if the author would classify the Wright brothers as 'tinkerers'.
Overall I find the tone pretty insulting; nobody believes they are the bee's
knees quite like academics do.

------
stared
Some time ago I wrote about academia vs industry in the context of data
science:

In academia, you are allowed to cherry-pick an artificial problem and work on
it for 2 years. The result needs to be novel, and you need to research
previous and similar solutions. The solution needs to be perfect, even if not
on time.

In industry, you should solve a given problem end-to-end. Things need to work,
and there is little difference if it is based on an academic paper, usage of
an existing library, your own code or an impromptu hack. The solution needs to
be on time, even if just good enough and based on shady and poorly understood
assumptions.

So, contrary to its name, data science is rarely science. That is, in data
science the emphasis is on practical results (like in engineering), not on
proofs, mathematical purity, or the rigour characteristic of academic science.

(from [http://p.migdal.pl/2016/03/15/data-science-intro-for-math-phys-background.html#priorities](http://p.migdal.pl/2016/03/15/data-science-intro-for-math-phys-background.html#priorities))

~~~
denzil_correa
> So, contrary to its name, data science is rarely science. That is, in data
> science the emphasis is on practical results (like in engineering), not on
> proofs, mathematical purity, or the rigour characteristic of academic
> science.

Science in a nutshell is to take a hypothesis, test it rigorously, and
validate or invalidate the hypothesis based on the results. The results of
this experiment could be extremely practical. Data science that follows
scientific experimental design is doing science. The kind that doesn't is
equivalent to reproducing past experimental results, or to putting a blindfold
over your eyes and throwing darts around until one hits the bull's eye.
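
The hypothesize-test-validate loop described here can be made concrete with a small sketch (mine, not from the thread): a permutation test asking whether two samples plausibly come from the same distribution.

```python
import numpy as np

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test on the difference of means.

    Null hypothesis: `a` and `b` are drawn from the same distribution.
    Returns an estimated p-value: the fraction of random relabelings
    whose mean difference is at least as extreme as the observed one.
    """
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabeling under the null
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        if diff >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)          # conservative p-value estimate
```

A small p-value is evidence against the null; whether the question being tested matters is, of course, exactly the engineering-vs-science distinction under discussion.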

~~~
stared
I disagree with this common yet faulty explanation. (IMHO it comes from people
who have never worked in science, and for whom everything R&D-like looks like
"science!".)

Building engines or cars (involving a lot of tests!) is considered
engineering. And like any serious engineering, it's not purely random trial
and error. A/B tests are not a sufficient criterion to make something science.
And data science (in business) is focused on solving practical problems (which
may or may not be research) rather than open-ended research (which may or may
not be practical).

Source: I worked in academia. I now work in data science. You?

~~~
denzil_correa
I have a Ph.D. in Computer Science and I have worked across research labs in
industry too, so I very much understand where you come from. That said, I
don't see the disagreement between us, or which part of my explanation you
found faulty.

~~~
stared
Sorry for being combative. It comes from two things:

- giving talks at universities (where I admit that "data science" is more
engineering compared to academic science)

- people who have only had experience with software development, for whom any
research-like stuff (or simple mathematics) suddenly becomes "science!"

------
togelius
I'm the author, and I'm somewhat amused by the negative tone of many of the
comments. Seems there are plenty of people with resentments towards research
here ;)

~~~
strken
I think it's because it treats hackers as deficient academics who can't be
bothered to cite their sources properly or do any research, while there's an
equal and opposite argument that academics are deficient hackers who can't be
bothered to put their code on github or write any documentation.

In reality, neither is really deficient, they're just aiming at different
audiences which need different things.

~~~
togelius
Oh yes, I'm a terrible hacker, I rarely write code anymore. An unfortunate
byproduct of academia is that you get "promoted out of the job", and all the
hands-on work is done by your lab members.

I didn't mean to imply that tinkering was inferior to research - the whole
premise was just to tease out how they're different, with different audiences,
as you say. Interestingly, the discussion here has been dominated by people
who think that I look down on them. People who've discussed it in other fora
have not read the post that way.

~~~
deepnet
Togelius, I believe you are in error in Seth Bling's case.

The "Show More" section under his YouTube MarI/O video links directly to
Stanley and Miikkulainen's NEAT paper,

and he links the Wikipedia page on neuroevolution
[https://en.wikipedia.org/wiki/Neuroevolution](https://en.wikipedia.org/wiki/Neuroevolution)

which cites two [1][2] of your papers, among others. Yet you say: " _it is
atrocious because of the complete lack of scholarship. The guy didn't even
know he was reinventing the wheel, and didn't care to look it up._ "

A bit harsh, perhaps? At least mentioning the foundational NEAT paper is not a
complete lack of scholarship.

I feel you have overlooked that Seth does try to provide good citations to his
audience, and it would be kind if you mentioned him by name in your article
rather than as _"some guy"_.

He did inspire other Twitch streamers to experiment with neuroevolution and
neural nets and to benchmark many SNES games [3], although this downstream
follow-up goes uncited by you.

[1] "Neuroevolution in Games: State of the Art and Open Challenges"
[https://arxiv.org/pdf/1410.7326v3.pdf](https://arxiv.org/pdf/1410.7326v3.pdf)

[2] "Countering poisonous inputs with memetic neuroevolution"
(PDF)[https://www.academia.edu/download/30945872/poison.pdf](https://www.academia.edu/download/30945872/poison.pdf)

[3] Mario Bros mari/o
[https://clips.twitch.tv/CourteousEmpathicTruffleCeilingCat](https://clips.twitch.tv/CourteousEmpathicTruffleCeilingCat)

[https://www.youtube.com/watch?v=bRxUQNFxAWc](https://www.youtube.com/watch?v=bRxUQNFxAWc)
Mari/o kart winterbunny

[https://clips.twitch.tv/HilariousPolishedZucchiniHassanChop](https://clips.twitch.tv/HilariousPolishedZucchiniHassanChop)
\- mario kart RNN mariflow

------
snarfy
"The difference between screwing around and science is writing it down." -
tpai

------
sigi45
Whenever I read a paper, I have trouble understanding the fine details.

That one math formula which is nowhere explained? Yeah, good...

Often enough I get the idea and details like the results, but that's it.

Those papers are not tutorials, and they are not written to be reproduced and
easily understandable. They are, in my opinion, written for other scientists
who spend their work time doing something similar, or who need to solve the
problem the paper solves and rebuild those results through time-consuming
work.

Probably still better than having no paper, but still way more work than just
using it.

I do read "the morning paper"
([https://blog.acolyer.org/](https://blog.acolyer.org/)) and he is really good
at re-evaluating papers. I'm very curious how much time he spends reading and
analysing them.

------
commandlinefan
> It is easy to miss that someone tackled the same problem as you (or had the
> same idea as you) last year

... and hard to resist the temptation to go jump out the window when you find
out that the thing that you've been trying to figure out for a year has
already been figured out...

~~~
jacobolus
There are independent benefits to struggling with something for a while by
yourself before adopting someone else’s solution. For one thing, you will
understand the problem and their solution a whole lot better.

~~~
colmvp
Pretty much. There are for example studies that show that people who attempt a
problem before finding a solution learn a lot more than people who just look
at the solution.

------
mmilano
The word research is pretty vague compared to the specific meaning the author
reads into it. He over-reaches in his explanation of what a researcher is, and
leaves too large a gap between researcher and tinkerer.

~~~
hatmatrix
Maybe the distinction he is trying to make is between academic work and non-
academic work (hence the emphasis on scholarship). Non-academics can do
research but what makes it academic is its placement in the broader context of
civilized endeavors.

------
thethirdone
> Seen as tinkering, that work and video is good work; seen as research, it is
> atrocious because of the complete lack of scholarship. The guy didn't even
> know he was reinventing the wheel, and didn't care to look it up.

I think it's important to note that he does reference NEAT (and the paper it
comes from) and the emulator he used. This is building on what others have
already done. While the 2009 paper from the post author would definitely be
worth a note, I feel "atrocious" is a bit harsh.

------
hmwhy
I tend to disagree with those distinctions and I'm inclined to think the
difference is simply that, to a "researcher", doing things is a job; and to a
"tinkerer", doing things is a joy.

## Scholarship

To begin with, there are many "tinkerers" out there who practise good
scholarship, acknowledge prior art, and provide references. Anyone who is
courteous and aware that the world doesn't revolve around her would honestly
document these things even if she is a "tinkerer".

And just as there are "tinkerers" who practise good scholarship, there are
"researchers" who practise bad scholarship as has been pointed out by
@jacobolus.

## Testing

I'm not sure what makes the author think that rigorous testing is only a
"researcher" quality. The article just insulted all the "tinkerers" out there
by saying:

> "Here's another big thing. A tinkerer makes a thing and puts it out there. A
> researcher also tests the thing in some way, and writes up what happens."

A "tinkerer" doesn't just make something and put it out there. A serious
"tinkerer" would not want to put something that doesn't work out there—it's
embarrassing, and it's just wrong. Making something (reasonably complex) that
works usually involves a lot of testing, even if the goal isn't to publish a
scientific paper.

## Goals

This bit really lost me. It sounds like a bunch of self-contradictory,
"researcher"-glorifying statements one after another.

> Usually, goals in research are not just goals, but ambitious goals. The
> reason we don't know what the results of a research project will be is that
> the project is ambitious; no-one (as far we know) has attempted what we do
> before so our best guesses at what will happen are just that: guesses.

A "tinkerer" can have ambitious goals. A tinkerer may do something that nobody
has attempted before and, naturally, in that case she would just have to make
guesses.

If, hypothetically, I set a goal to investigate the possibility of using CSS
to make high-quality 3D games that run at 60 FPS in web browsers, does that
make me a "tinkerer" or a "researcher"? What if, in the process of doing so, I
practise good scholarship and document everything carefully in a well-
organised format, but simply have no interest in publishing it in a
scientific journal—does that make me a "tinkerer" or a "researcher"?

On this:

> However, if you read a scientific paper those are usually not the stated
> reasons for embarking on the research work presented in the paper. Usually,
> the work is said to be motivated by some scientific problem (e.g. optimizing
> real-value vectors in high-dimensional spaces, identifying faces in a crowd,
> generating fun game levels for Super Mario Bros). And that is often the
> truth, or at least part of the truth, from a certain angle.

The stated reasons in a scientific paper are usually written _after_ the
research has been carried out, and they are designed to tell a reader why the
work is important. In a space where everyone is struggling to keep up with
publishing so they can keep their jobs, who in their right mind would begin
a paper with "our research group embarked on [insert amazing thing] because it
seemed fun and has never been done before"? Actually, I would love to read
papers like that, because most of the time you know it's the usual standard
crap when academics start a paper by stating a problem—it probably means they
haven't solved the problem and are not even close.

Also, I would like to draw your attention to the last, contradictory sentence.
It seems that the author isn't quite sure either.

## Persistence

If someone spends 10 years modifying and perfecting something that has never
been attempted before, but does not care for its scientific value, does that
make her a "tinkerer" or a "researcher"?

Bottom line, anyone who appreciates, and wants to do, good work will naturally
do all of those things above—"tinkerer" or not. To throw away sentences like:

> Probably the most important difference between tinkering and research is
> scholarship.

> A tinkerer makes a thing and puts it out there. A researcher also tests the
> thing in some way, and writes up what happens.

> While tinkering can be (and often is) done for the hell of it, research is
> meant to have some kind of goal.

> Tinkerers are content to release something and then forget about it.
> Researchers carry out sustained efforts over a long time, where individual
> experiments and papers are part of the puzzle.

... is just simply conceited.

------
cosmic_ape
The term research has become extremely diluted. It attempts to put very
different things under one roof, and so is not very useful.

Near the end of that blog post you find:

> Reading the book, I felt that most of my research is not science, barely
> engineering and absolutely not mathematics. But I still think I do valuable
> and interesting research, so I set out to explain what I am doing.

Of course, both the blog author and the book author are right in a way. But
probably the more useful thing to do here would be to come up with new names
for these different kinds of activities, rather than destroy the language by
placing one more thing into the "research in general" category.

------
vinchuco
Article author

> The video certainly reached more people on the internet than my work did; it
> makes no mention of any previous work.

Video

>"I didn't come up with this on my own, it's based on an algorithm called
NEAT, based on a paper by..."

------
traverseda
> Scholarship

>Last year, some guy made an experiment with evolving neural networks for
Super Mario Bros and made a YouTube video out of it. The video certainly
reached more people on the internet than my work did; it makes no mention of
any previous work. Seen as tinkering, that work and video is good work; seen
as research, it is atrocious because of the complete lack of scholarship. The
guy didn't even know he was reinventing the wheel, and didn't care to look it
up.

Then publish your damn results in a damn open journal, dammit.

~~~
sloreti
Also the video discusses previous work starting at 4:40, so I'm confused what
OP is complaining about unless the MarI/O guy re-uploaded with this
attribution.

[https://youtu.be/qv6UVOQ0F44?t=4m40s](https://youtu.be/qv6UVOQ0F44?t=4m40s)

~~~
fwilliams
This is hardly a discussion of previous work. The author of the video merely
states which paper he implemented.

In any semi-decent peer reviewed venue, you would cite a wide variety of
papers that solve a similar or related problem or introduce a concept related
to the method in the paper.

The related work section of a paper is one of the most important parts since
it puts the research into context. By citing other work, the authors explain
what has already been done, and what contribution their work makes.

A related work section should also illustrate the downsides, limitations, and
differences of other cited research. Limitations of other works are often
poorly understood since very few people have had the time to evaluate them
beyond the initial experiments done before first publication.

Research is not simply about presenting new techniques but also understanding
the trade-offs that arise when choosing different solutions to a problem.

~~~
traverseda
Replicating the results of existing papers is important, too.

~~~
lorenzhs
That is a different issue, though. The "Related Work" section is there to tell
the reader about related approaches to similar problems, previous research on
the same topic, etc. It's where you distinguish your work from what was
previously done, and justify why this is a meaningful distinction. It's "what
others have done and why we don't just use that".

Reproducibility is important but a completely different topic and I don't know
why you're bringing it up here.

------
deepnet
OP, Togelius, is quite wrong on at least one point.

tl;dr Seth Bling directly references NEAT, and his links [Show More[1]] link
to 2 Togelius papers[13][14] among many others!

To wit, in the OP, Togelius mentions Seth Bling's MarI/O[1] as lacking
scholarship because he didn't mention Togelius:

"_seen as research, it is atrocious because of the complete lack of
scholarship. The guy didn't even know he was reinventing the wheel, and
didn't care to look it up._"

Rather disdainfully (sadly/ironically/poignantly), Togelius refers to Seth
not by name but only as "_some guy_".

Yet Seth Bling[2], in his short video, mentions and explains NEAT and directly
links Ken Stanley's and Risto Miikkulainen's foundational NEAT paper[10] at
UT Austin, on which Togelius builds [hint: click SHOW MORE under Seth's video].

And much more, e.g.:

\---- the innovative toolchain Seth uses: BizHawk, to script SNES emulators in
Lua[12]

\---- Seth links the Wikipedia page on Neuroevolution[11], which directly
references 2 of Togelius's own papers[13][14]

Seth's [Show More] links on the MarI/O video[1] are very comprehensive and
(IMHO) indeed a great example of scholarship!

Togelius is a top-flight scholar who publishes in the field of AI game
players and level generators[15], so I hope he will welcome some random
_tinkerer_ pointing out his error.

[full disclosure: I am a Togelius fan]

To give Togelius his due he does at least link to Seth's MarI/O video.

Perhaps Togelius failed to click/read the Show More links? Giving the
charitable benefit of the doubt, this was an accidental oversight.

Seth Bling is primarily a Twitch streamer[2] and speedrunner[3] who often
popularises complex programming[4], math, and academic ideas[5] to a primarily
young audience using Mario and Minecraft[6], and takes requests via his
subreddit[7].

Seth's very popular YouTube video MarI/O, which uses NEAT (NeuroEvolution of
Augmenting Topologies) to learn and play Super Mario World, has led to a huge
number of streamers on Twitch playing other SMW levels[16] and SNES games
using RNNs, LSTMs, and more, like RNN multiplayer x4 Mario Kart[17].
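For readers who haven't met the technique being discussed: NEAT evolves both
the weights and the topology of a neural network via mutation, crossover, and
speciation. A full NEAT implementation is involved, but the core
neuroevolution loop is simple. Below is a minimal sketch of weight-only
neuroevolution (not Seth's code and not full NEAT; the toy fitness function
standing in for "distance travelled in Mario" is invented for illustration):

```python
import random

# Toy stand-in for a game: fitness measures how close a 2-parameter
# policy genome is to a hidden target behaviour. Higher is better.
TARGET = [0.5, -0.3]

def fitness(genome):
    # Negative squared error; a perfect genome scores 0.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, sigma=0.1):
    # Gaussian weight perturbation, the simplest neuroevolution operator.
    return [g + random.gauss(0, sigma) for g in genome]

def evolve(generations=200, pop_size=50):
    random.seed(0)
    population = [[random.uniform(-1, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the top quarter as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 4]
        # Refill the population with mutated copies of the parents;
        # keeping the parents themselves gives simple elitism.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

best = evolve()
```

NEAT adds to this loop the mutation of network structure (adding nodes and
connections) and speciation to protect new topologies while they mature.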

The OP article is great; academic rigor and bibliographic scholarship might
_sometimes_ differentiate the full-time university scholar from the amateur
researcher, enthusiast, or tinkerer.

Seth and his many followers may not be 'scholars' in the academic-research
sense, but theirs is an amateur community of mostly youngsters doing their own
research by having fun and sharing ideas, mostly undocumented outside their
own transient Twitch streams.

Compare this to the well-respected amateur astronomy community: perhaps the
responsible academic approach is to try to reach out and tap in, rather than
adding to the distance.

By his own standards, Togelius's OP is lacking in that it hasn't discovered/
mentioned the work done by the Twitch/BizHawk community - it is not entirely
undocumented[16].

Togelius, your AI Mario competition[8] is legendary in academic AI circles but
not necessarily widely known by youngsters playing Mario (and, to be fair,
AFAIK you don't publish in their forums like Twitch), although the Infinite
Mario code[9] you base your work on is by Markus Persson.

Persson is better known as Notch (author of Minecraft), who to the under-16
crowd is perhaps more famous than even Alan Turing, Yann LeCun, Jürgen
Schmidhuber, or Geoff Hinton.

Perhaps Togelius could mash up some of his work and do a crossover episode
with Seth, or relaunch the Infinite Mario competition[8]/benchmarks for this
new younger crowd evolving neural topologies to play Mario games - or even
help document some of their fun.

As public servants funded by tax dollars, it is perhaps the responsibility of
the academic to reach out, rather than to blame fun-loving amateurs, who
popularise and do research/'tinkering', for not putting in the due diligence
of scrupulously mentioning every source.

[IMHO] I agree amateurs don't always do full and proper scholarship, and links
are food for hungry minds - but they aren't publicly funded researchers; they
are amateurs working for love of the subject in their spare time, for free or
for streaming donations and YouTube hits, using transitory live media, with an
audience that must be constantly engaged and may have shorter attention spans
than the average academic.

[full disclosure: I am an amateur neural net nerd]

[1]Seth Bling's MarI/O
[https://www.youtube.com/watch?v=qv6UVOQ0F44](https://www.youtube.com/watch?v=qv6UVOQ0F44)

[2][https://www.twitch.tv/sethbling](https://www.twitch.tv/sethbling)

[3][https://www.youtube.com/watch?v=-spFoon7klA](https://www.youtube.com/watch?v=-spFoon7klA)
sub 1 min SMW via credits warp

[4]Bling's arranging shells in SMW to human inject Flappy Bird code into SNES
[https://www.twitch.tv/videos/57032858](https://www.twitch.tv/videos/57032858)

[4]Bling's Atari 2600 emulator built in Minecraft
[https://www.youtube.com/watch?v=5nViIUfDMJg](https://www.youtube.com/watch?v=5nViIUfDMJg)

[5]Seth Bling's RNN Mario Kart race
[https://www.youtube.com/watch?v=Ipi40cb_RsI](https://www.youtube.com/watch?v=Ipi40cb_RsI)

[6]Bling's Minecraft Redstone tutorials
[https://www.youtube.com/watch?v=DzSpuMDtyUU&list=PL2Qvl4gaBg...](https://www.youtube.com/watch?v=DzSpuMDtyUU&list=PL2Qvl4gaBge1kABr3aBCrMSb94HFLIjbn)

[7][https://sethbling.reddit.com](https://sethbling.reddit.com)

[8]Togelius 2009 Infinite Mario AI competition
[http://julian.togelius.com/mariocompetition2009/](http://julian.togelius.com/mariocompetition2009/)

[9][https://web.archive.org/web/20080423023424/http://www.mojang...](https://web.archive.org/web/20080423023424/http://www.mojang.com/notch/mario/)

[10]Stanley Miikkulainen NEAT paper
[http://nn.cs.utexas.edu/?stanley:ec02](http://nn.cs.utexas.edu/?stanley:ec02)

[11]
[https://en.wikipedia.org/wiki/Neuroevolution](https://en.wikipedia.org/wiki/Neuroevolution)

[12] Scripting Lua in Bizhawk Emulator
[http://tasvideos.org/Bizhawk/LuaFunctions.html](http://tasvideos.org/Bizhawk/LuaFunctions.html)

[13] Risi, Sebastian; Togelius, Julian (2017). "Neuroevolution in Games: State
of the Art and Open Challenges" (PDF). IEEE Transactions on Computational
Intelligence and AI in
Games.[https://arxiv.org/pdf/1410.7326v3.pdf](https://arxiv.org/pdf/1410.7326v3.pdf)

[14] Togelius, Julian; Schaul, Tom; Schmidhuber, Jurgen; Gomez, Faustino
(2008), "Countering poisonous inputs with memetic neuroevolution" (PDF),
Parallel Problem Solving from Nature
[https://www.academia.edu/download/30945872/poison.pdf](https://www.academia.edu/download/30945872/poison.pdf)

[15] Togelius' Game AI book [http://gameaibook.org/](http://gameaibook.org/)

[16]
[https://clips.twitch.tv/CourteousEmpathicTruffleCeilingCat](https://clips.twitch.tv/CourteousEmpathicTruffleCeilingCat)

[16] [https://thenextweb.com/artificial-
intelligence/2018/01/03/th...](https://thenextweb.com/artificial-
intelligence/2018/01/03/this-live-stream-of-ai-learning-to-play-super-mario-
bros-is-awesome/)

[17]
[https://www.youtube.com/watch?v=S9Y_I9vY8Qw](https://www.youtube.com/watch?v=S9Y_I9vY8Qw)

~~~
1wu
Thanks for the commentary togelius [1, 2] and deepnet [above].

Anyone interested in adding a soupçon of scholarship to Seth's project?

Words do mean different things to different people, in different contexts.

There is a recent body of literature that explores the modern "maker" [3]
movement. However, "maker" as a term may not have been a good fit for the OP's
argument [1], which contrasted (academic) researchers with so-called
"tinkerers".

An alternative term for "tinkerer" might be "bricoleur", a loanword from
French. (Roughly, it still means one who tinkers:
[https://en.wikipedia.org/wiki/Bricolage](https://en.wikipedia.org/wiki/Bricolage)
but has other meanings depending on the academic lens.)

Given that we are discussing AIs that play, in the context of education, we
can also go back to Seymour Papert's work on
[https://en.wikipedia.org/wiki/Constructionism_(learning_theo...](https://en.wikipedia.org/wiki/Constructionism_\(learning_theory\))
.

Originally known for work on _Perceptrons_ with Marvin Minsky, AI researcher
Papert later adapted theories from education towards the vision of "learning-
by-making" and the (young) bricoleur [4]. This approach can be seen in the
evolution from the 1960s graphical [turtle]
[https://en.wikipedia.org/wiki/Logo_%28programming_language%2...](https://en.wikipedia.org/wiki/Logo_%28programming_language%29)
to Lego
[https://en.wikipedia.org/wiki/Mindstorms_(book)](https://en.wikipedia.org/wiki/Mindstorms_\(book\))
to modern day efforts to encourage coding-for-kids [5,6,7].

One of Papert's later collaborators, Sherry Turkle, discusses bricolage as it
applies to programming --
[https://en.wikipedia.org/wiki/Bricolage#Internet](https://en.wikipedia.org/wiki/Bricolage#Internet)
.

When it comes to early education, Turkle argues for epistemological pluralism
[8] and cites anthropologist Lévi-Strauss in comparing analytic science with a
"science of the concrete".

We can appreciate both Seth Bling's concreteness [9] and Togelius's original
papers for academics. Almost a decade ago, Togelius introduced Super Mario
Brothers as a benchmark for reinforcement learning [10] and, with Karakovskiy,
for AI more generally [11].

deepnet what's your interest in neural nets?

[1] [http://togelius.blogspot.com/2016/04/the-differences-
between...](http://togelius.blogspot.com/2016/04/the-differences-between-
tinkering-and.html)

[2]
[https://news.ycombinator.com/item?id=16744694](https://news.ycombinator.com/item?id=16744694)

[3] For example,
[https://scholar.google.com/scholar?&q=maker+movement](https://scholar.google.com/scholar?&q=maker+movement)

[4]
[http://www.papert.org/articles/SituatingConstructionism.html](http://www.papert.org/articles/SituatingConstructionism.html)

[5] [https://scratch.mit.edu/](https://scratch.mit.edu/)

[6]
[https://www.apple.com/swift/playgrounds/](https://www.apple.com/swift/playgrounds/)

[7] [https://github.com/google/blockly-
games/wiki](https://github.com/google/blockly-games/wiki)

[8]
[http://www.papert.org/articles/EpistemologicalPluralism.html](http://www.papert.org/articles/EpistemologicalPluralism.html)

[9]
[https://www.youtube.com/watch?v=qv6UVOQ0F44](https://www.youtube.com/watch?v=qv6UVOQ0F44)

[10]
[http://julian.togelius.com/Togelius2009Super.pdf](http://julian.togelius.com/Togelius2009Super.pdf)

[11]
[http://julian.togelius.com/mariocompetition2009/](http://julian.togelius.com/mariocompetition2009/)

