
Under-Investigated Fields - ChronoBiologist
https://matthewmcateer.me/blog/under-investigated-fields/
======
Palomides
I think it's weird to suggest that programming languages are under-
investigated, and the discussion he gives of it makes me question his
conclusions about other fields. There are vastly different paradigms
completely orthogonal to the hierarchy in that image the author uses. I don't
think getting people to stop using assembly/C/fortran/cobol/php/whatever is a
research problem.

~~~
semi-extrinsic
> makes me question his conclusions about other fields

I think this list makes no sense unless it is made into a wiki of some sort
with community contributions from hundreds of people.

Under "Physics", he has a sub-heading "Increasing Iteration Speed of
Experimental Physics". This section mentions one random startup. Yet CERN has
hundreds of people actively working on this topic for decades.

They invented, built and implemented the world's first capacitive touch screen
control system in the period 1972-1976 _specifically to answer this need_.
That's just one example off the top of my head.

~~~
randallsquared
> in the period 1972-1976

Did you mean "for decades", or "decades ago"? I'm not sure a notable result
from half a century ago qualifies a field for the list he's creating about
modern research (without regard to the quality of that list).

~~~
semi-extrinsic
I'm trying to say they didn't stop doing it. They've done it for decades.

------
stereolambda
> _While it’s easy to point to areas in computer science that might be over-
> researched (after all, Machine Learning conferences often get more papers
> than they can effectively review), there are still areas that are neglected
> with respect to their potential benefit._

It would be worth considering what drives people towards researching
particular areas, even if it might seem kinda obvious. I.e. it might be
tempting to say it's visions of fame, loot & prizes, but I think to most
people it is obvious on some level that they personally won't get any
prominent position in these fields. I think it's partly a question of
discoverability (say I'm a student: how do I learn about these topics, and
that there are practical ways to work on them?), partly perceived prestige.

Also, getting into a _particular_ PhD or similar currently means thinking
years in advance -- I'm not talking about learning/studying here, but
connections, bureaucracy and applications, having paper "proofs" that you
know something, etc. You notice some interesting area towards the end of your
studies. It's too late to move even a modest distance from what you're doing
(e.g. from cognitive science to computational neuroscience) without wasting
additional precious years. And all that just to enter the not particularly
rosy world of academia.

As for me (not in academia nowadays), I find myself searching for a middle
ground between overcrowded fields (where I will probably do relative "grunt
work" at best) and fields that are so obscure as to be not viable. The fear
of having no steady income is too real.

------
bra-ket
Life-long learning -- worth mentioning the work by well-known CMU researchers:

Tom Mitchell, Never-ending learning (2015):
[https://www.cs.cmu.edu/~tom/pubs/NELL_aaai15.pdf](https://www.cs.cmu.edu/~tom/pubs/NELL_aaai15.pdf)

Sebastian Thrun, Life-long learning (1995):
[https://www.ri.cmu.edu/pub_files/pub1/thrun_sebastian_1995_1...](https://www.ri.cmu.edu/pub_files/pub1/thrun_sebastian_1995_1/thrun_sebastian_1995_1.pdf)

Also this seminar at Stanford on Lifelong Machine Learning (2013):
[https://www.seas.upenn.edu/~eeaton/AAAI-SSS13-LML/#Schedule](https://www.seas.upenn.edu/~eeaton/AAAI-SSS13-LML/#Schedule)

------
crawfordcomeaux
Applied category theory and uncertainty logic could be added to math.

------
wendyshu
Usually things are under-investigated because they're hard.

~~~
throwawayjava
or useless.

or naive.

~~~
ci5er
Or heterodox.

~~~
throwawayjava
Nothing on this CS list is even close to heterodox... In fact, exactly the
opposite.

------
ImaTigger
This is sort of a ridiculous list. There is, possibly by definition, an
infinitude of under-investigated fields. A more useful list might be a list
of OVER-investigated fields, such as P vs. NP, Deep Learning, Consciousness,
fMRI, ...

------
exabrial
What makes a language "more productive"? I feel like this term first appeared
in the age of Ruby on Rails, but I've yet to see any sort of study. Now when I
see the term, it immediately raises suspicions and has the opposite effect
that the writer intended. Obviously the best tool for the job is the one you
know how to use proficiently, but is there a magical computer language that
can turn average programmers into high-performing ones?

------
Der_Einzige
Transfer learning is no longer under-investigated. Just look at how the NLP
and CV communities get state-of-the-art results.

~~~
GlenTheMachine
Transfer learning is absolutely under-investigated. The current results in CV
are awesome, but they only pertain to CV. There’s little underlying theory
that helps you apply it to other areas. Mine, for instance, is robotic
manipulation.
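For context, the recipe behind those CV results is mostly "freeze a
pretrained feature extractor, retrain only a small head on the new task". A
toy numpy sketch of that idea (the "pretrained" weights here are random
stand-ins, and all names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained feature extractor
# (random weights purely for illustration).
W_pretrained = rng.normal(size=(8, 4))

def features(x):
    # Frozen: never updated during fine-tuning below.
    return np.tanh(x @ W_pretrained)

# New downstream task: only a small linear head gets trained.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)

def logistic_loss(w):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

w_head = np.zeros(4)
loss_before = logistic_loss(w_head)          # ~log(2) at initialization
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head)))
    grad = features(X).T @ (p - y) / len(y)  # gradient of the logistic loss
    w_head -= 0.5 * grad                     # only the head moves
loss_after = logistic_loss(w_head)           # strictly lower than loss_before
```

The point GlenTheMachine makes stands: this recipe works because the frozen
features happen to transfer within vision tasks, and there is little theory
saying when the same trick carries over to, say, manipulation.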

~~~
throwawayjava
There's a huge difference between "under-investigated" and "doesn't live up to
the initial hope/hype".

It's possible for something to be over-investigated and also not produce
results. See also: the build up to AI winters.

------
ImaTigger
Coincidentally, I ran into this paper just now, on Latin squares (one of the
mentioned fields): [https://malmskog.files.wordpress.com/2011/10/revised-math-ma...](https://malmskog.files.wordpress.com/2011/10/revised-math-magazine-may-1.pdf)
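For anyone unfamiliar: a Latin square is an n×n grid in which each of n
symbols appears exactly once per row and once per column. The simplest
construction is the cyclic one (a minimal sketch):

```python
def cyclic_latin_square(n):
    # Row i is the sequence 0..n-1 rotated left by i positions.
    return [[(i + j) % n for j in range(n)] for i in range(n)]

square = cyclic_latin_square(4)
# Every row and every column contains each symbol exactly once.
assert all(sorted(row) == list(range(4)) for row in square)
assert all(sorted(col) == list(range(4)) for col in zip(*square))
```

The open questions in the field are about counting, enumeration and
orthogonality of such squares, not about constructing a single one.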

------
0xDEFC0DE
Anyone know if there's been research into terraforming via asteroid/comet
impacts and trying to simulate that?

~~~
zamadatix
PBS Space Time covered the idea a bit in a recent video
[https://www.youtube.com/watch?v=FshtPsOTCP4](https://www.youtube.com/watch?v=FshtPsOTCP4).
They had some numbers but I don't think they referenced any formal research in
that one.

------
stebann
That is NOT under-investigated; you just didn't find who is investigating it.

------
antiquark
> _Computer Science: Existential risks posed by technical debt_

The term "technical debt" has always rubbed me the wrong way.

Most technical decisions were sound... at the time!

I agree, people from 1999 didn't predict what would be happening in 2019. But
why is that considered to be some sort of debt?

~~~
lumost
True tech debt occurs when a business under-invests in its core technology
over a long period of time, or deals with concept drift in its business. I
know of multiple Fortune 500s that are reliant on bespoke emulation of
hardware and operating systems that haven't existed in decades; even worse,
the source code for the software they're running may no longer exist in any
usable form.

In many modern web companies a given project has a useful life of ~3-5
years. If it's still running by year 8 with a team that's been on KTLO
(keep-the-lights-on) duty, a few things are probably true.

A: No one knows how to productively add features.

B: The business need for the project was much larger than the KTLO funding
would imply.

Odds are at this point there is a long list of user complaints, year+ old
feature requests, and excuses being made to the board for why some initiative
is facing yet another delay.

Perhaps we should be talking about software depreciation rather than tech
debt?
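Taking the depreciation framing literally for a second: the simplest
accounting convention is a straight-line schedule, which for software would
look like this (the cost and useful-life numbers are made up):

```python
def straight_line_depreciation(initial_value, useful_life_years):
    # Value remaining at the start of each year, written down evenly
    # over the asset's useful life.
    annual = initial_value / useful_life_years
    return [initial_value - annual * year for year in range(useful_life_years + 1)]

# A project that cost 500k to build, with a ~5-year useful life:
schedule = straight_line_depreciation(500_000, 5)
# → [500000.0, 400000.0, 300000.0, 200000.0, 100000.0, 0.0]
```

Framed this way, a year-8 KTLO project is an asset running well past the end
of its depreciation schedule, which is exactly when replacement decisions are
supposed to be made.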

~~~
sddfd
Let's also talk about why LTS is a dangerous idea: It provides an excuse not
to update.

Many tech debt traps start with relying on an LTS version of an OS or
libraries. The philosophy behind that is that software behaves like a chair:
You buy it once, and then you can sit on it until it is no longer needed.

A much better analogy is a horse: You need to feed and take care of it daily,
and you need to be ready for it to die while you still need it.

