
Research questions that could have a big social impact, organised by discipline - apsec112
https://80000hours.org/articles/research-questions-by-discipline/
======
hiidrew
Love these posts. Here are a couple more that are similar:

Patrick Collison-
[https://patrickcollison.com/questions](https://patrickcollison.com/questions)

Gwern- [https://www.gwern.net/Questions](https://www.gwern.net/Questions)

Alexey Guzey- [https://guzey.com/personal/research-ideas/](https://guzey.com/personal/research-ideas/)

Anybody have more?

------
glitchc
Sorry, these are actually terrible. Some of the biggest questions off the top
of my head:

1) How do we build machines that repair themselves?

2) How can we get human bodies to automatically repair and regrow damaged
limbs and organs?

3) How do we build an energy source that is the same volume and mass as a
litre of petroleum, is as stable and portable as petroleum but possesses 10x
the energy density?

4) How can we get the human immune system to automatically develop suitable
antiviral antibodies to combat a viral infection?

5) When will heated toilet seats become standard in North American homes (or
at least north of Virginia)?

~~~
Aperocky
Solution to everything:

put human (consciousness) into silicon. Now all problems become one-dimensional. (2), (4), and (5) become moot, (3) is quite unnecessary because time is now more or less expendable, and we can just work on (1).

------
koeng
As a biologist, the Biology and genetics section was really disappointing.

> What’s the minimum viable human population?

Depends on your technology. If you have frozen sperm and an effective means of
genetic engineering (to fix mutations), the answer is 1.

How about the long term implications of being able to fundamentally engineer
life?

~~~
hyperbovine
Agreed. The questions specific to my field do not seem interesting to me at
all as a researcher. They are either way too vague and ill-defined to qualify
as serious research, or way too niche to have a "big social impact".

Also, IANAB (I am not a biologist), but it surprises me that unravelling the
genetic architecture of complex phenotypes is not represented on this list in
any way, shape or form.

~~~
trenchgun
Make a pull request.

------
sgustard
The year 2020 is providing all the Big Questions we can handle. How do we stop
a pandemic? Climate change driven fires the size of a continent? Reforming the
police to stop racial violence? Fighting bots and trolls that threaten
democracy?

~~~
glial
That last one is easy, at least: ban ad-driven social media as a business
model.

~~~
nine_k
Being ad-driven is a red herring. Trolls were known to exist on totally non-
commercial mailing lists.

Being pseudonymous is key, but it's also the valuable part.

~~~
cameldrv
Being ad-driven is an important component because a big part of the incentive
to maximize engagement is to drive ad impressions. If social media companies
were compensated in some other way, the content might be different.

------
ma2rten
All the computer science questions are about ML. Are there no impactful computer science problems left outside of ML? I also get the impression that the author did not really understand the ML/AI research questions. Otherwise, they would probably have chosen more general/high-level questions.

~~~
ForHackernews
The "effective altruism" movement (of which 80,000 Hours is a part) has long
been preoccupied with preventing a malevolent, superintelligent AI from
killing or enslaving humanity (they call this fostering "friendly AI"). Their
position is that this is a low-probability but extremely severe risk that few
people are working on preventing.

Whether AI is really more dangerous than, say, pandemics or asteroids, is left
as an exercise for the reader.

~~~
80386
AI safety isn't an EA "preoccupation"; it's just weird enough and noticeable enough that it's easy to mistake visibility for prevalence. It's also not even their weirdest position.

The first question on their list is about the 'problem' of wild animal
suffering - and I've personally seen EAs argue that, because some animals are
carnivorous, nature should be destroyed.

That's not even the weirdest position EAs take. Look up Brian Tomasik.
Specifically, his paper about the possibility that _electrons_ might suffer.

Concern about superhuman AI is one thing; bullet-biting utilitarianism is
another entirely.

(This isn't the only place where their philosophical framework is stuck in the
British Empire; they also tend to take a teleological view of history and
moral development, and believe that their views are the self-evident
progression of ethical development that every culture and civilization will
come to eventually. They may not be as bad about this now as they used to be -
there are questions about China now - but I don't think they're quite to the
point of coming to terms with cultural contingency yet.)

~~~
Noos
It's a preoccupation because EA is mostly a rationalist thing, and Eliezer Yudkowsky has had tremendous influence on that movement through his involvement with Less Wrong. His views on AI have more or less become a mainstream position among them.

80k Hours is more a cultural snapshot of the rationalist movement than anything.

------
zwieback
Needs a section on engineering research, specifically energy. Cleaner energy
and energy distribution is the #1 global concern, in my opinion, and has been
for a long time.

~~~
w1
yes! i know this is a hyperbolic example, but infinite clean energy would
clear up a lot of problems on this list.

------
roenxi
Ha, such a list is a great thing to argue about. But nevertheless, I am
disappointed by the politics section.

The Big Question in politics is, as it has been for 2 centuries now, how do we
deploy the brightest minds on the biggest problems.

Our biggest problem in politics is that decisions have to make sense to the
median voter. If someone comes up with a process that can transcend that
without the well understood failings of dictatorship, that would be game
changing beyond game changing.

The world has too many problems with well known solutions. People keep
reinventing square wheels instead of being honest about what worked and didn't
in the past.

~~~
Sawamara
"The Big Question in politics is, as it has been for 2 centuries now, how do
we deploy the brightest minds on the biggest problems."

I am sorry to have to inform you, but this is a meritocratic, very naive view of modern societies that every single data point argues against. That is not the biggest problem.

The biggest problem is: what do we do when there is an ongoing state capture of your country by oligopolistic pan-continental corporate structures?

Stop assuming that we just have to find a solution. Solutions exist. We can have co-owned drone farms sustaining entire villages. We COULD have that. We already have tens of thousands of bright scientists who work on drugs where the patents go to the corporations, not to those who actually did the research. Profits, however, are more important to those who make decisions about our ways of living.

~~~
zozbot234
> We already have tens of thousands of bright scientists who work on drugs
> where the patents go to the corporations, not for those who actually did the
> research anyway.

That's because corporations do the part of drug research that actually
matters, viz. translational research and safety/effectiveness studies. And it
would be just as expensive if non-profits were doing it, so talking about
"profits" is just not relevant.

~~~
ska
Both parts matter, but you are right that it wouldn't be significantly cheaper to do the translational and regulatory work publicly or through non-profits.

What probably _would_ change is the choice of which conditions and drugs to target, and to what degree. It could conceivably shift from an ability-to-pay focus to an impact focus.

Exercise left to the reader to decide if that would be desirable and/or likely
to be more successful overall.

------
kinghtown
Extreme life extension looms large in my mind and I don’t have much trouble
imagining that many people hope for this as well. My gut tells me that it’s
possible but my mind says we aren’t the ones to get it.

I feel like the barrier here is more of a political one than being a matter of
research or feasibility. I never quite understood why we are not throwing
money at this issue. I mean I get that we collectively have doubt it could
happen but the alternative is the big sleep anyway so why the hell not try?

Every day we feel tantalizingly closer to solving each of these problems but
each decade seems to pass by quicker than the last. Perhaps we are a terminal
species.

Maybe the real big problem lies in social science or psychology. Better
stewardship or collective modeling. Maybe we should start force-feeding lsd
into people who actively vote against and stand opposed to scientific
progress. There’s got to be more in life than just withdrawing a paycheque or
not and dying eventually. Sinking into the past and eventually erased from
living memory. I once saw a picture of a guy in the 1920s who bred horses in
the Greek mountains. This vague memory is all that’s left of my great
grandfather. I don’t even know his name.

~~~
mc32
I get it but I don’t understand it.

Can you imagine: on the one hand, post-industrial countries inhabited by a bunch of multi-centenarians; on the other, poor countries overpopulated by youngsters.

In the post-industrial countries, the minuscule pool of young people has no chance to dislodge entrenched old people who hold the power.

There is less rejuvenation and less change in these multi-centenarian societies, and they become calcified, ossified. Old buggers hanging on for what? Will their minds stay plastic? Sure, I don't always agree with my young self or with today's young on all things, but I believe they should have the opportunity to shape their society in their time without obstructionist old-timers.

~~~
godelski
Well, the good news is that historically, cultural change has outpaced generational turnover. It is still a slow process, but it is clear that cultural change doesn't happen only because old people die and young people with new ideas replace them.

~~~
Barrin92
I don't think that's so clear because historically we've never been in a
situation where the average age of the population moves towards 40-50, or
already is there in some countries. The median age in the US in 1900 was 23.

If the oldest members of the population constitute a majority, and you're
living in a democracy, cultural change is pretty hard to achieve.

Here in Germany, where we have one of the oldest populations on the planet,
politics consists essentially of one issue, social services and pensions.

~~~
godelski
At least here in America, culture seems to have shifted very quickly over the last 5, 10, 20, 50 years. I can't speak much for German culture, but I can say with high certainty that it is extremely different from what it was 30 years ago. Yes, there are similarities, but things have changed quite a bit.

------
hhs
> The replication crisis has cast into doubt important research findings, such
> as the Stanford prison experiment. What other socially important findings
> have been undermined? How should we interpret scientific literature post-
> crisis?

Teaching this should start early, in middle school and high school, too. I'm surprised the replication crisis doesn't get more attention.

~~~
rmah
99.9% of teens do not have the knowledge, skills or capability to critically
examine academic research. Hell, most secondary and high school teachers don't
have those skills. Thus, all you can say is "be skeptical". But that is next
to useless if the student has no basis to make an assessment -- all it will do
is erode confidence in the process of science. Instead, IMO, students should
be 1) given a firm foundation of well accepted knowledge and 2) taught how the
process of science works.

As an aside, I'd ask those who promote the idea that children should be taught
"how to think" instead of "facts they can just look up" to think a bit more
deeply about what they are saying. IMO, you cannot reach valid conclusions
without facts. Facts are the foundation of knowledge, synthesis and analysis.
Without facts, one cannot understand a topic or even know what questions to
ask about it. Note, I'm not saying that children should be force-fed facts via rote learning. But consider: what happens when someone believes they can think for themselves but is ignorant of the topic they're thinking about?

~~~
SpicyLemonZest
I'd expect them to iteratively learn the facts of the topic, just as the
people who first discovered those facts would have had to. Many areas of
knowledge are traditionally introduced that way. Of course it can't be the
only teaching strategy, since many things couldn't be learned within a human
lifespan without explicit instruction on the facts, but it plays an important
pedagogical role.

The replication crisis indicates, frankly, that confidence in the process of
science _should_ be eroded a bit. Many people I know treat scientific studies
as true until proven otherwise, and the replication crisis demonstrates this
isn't an accurate assessment.

~~~
scarmig
There are two different kinds of confidence in science. There's the type that uncritically accepts an article at face value (oranges cause/cure cancer! Time to ban/require orange consumption!), and there's the type that says "the scientific community thinks evolution is true, so even if individual articles can be criticized, on the whole I shouldn't discount the theory based on a single critique or mistake." The former is good to undermine; the latter is bad to undermine. But the boundary between them can be murky.

------
iskander
Can someone tell me more about the ideological (theological?) assumptions
underpinning these questions? They seem to come from a very peculiar
understanding of "social impact".

~~~
jessriedel
80,000 Hours would associate themselves with "Effective Altruism"

[https://en.wikipedia.org/wiki/Effective_altruism](https://en.wikipedia.org/wiki/Effective_altruism)

[https://80000hours.org/about/#how-are-you-funded](https://80000hours.org/about/#how-are-you-funded)

The principles of effective altruism are largely consequentialist,
utilitarian, cosmopolitan, and humanist, although none of those fit exactly
right.

~~~
notahacker
But in practice, this list of questions reflects less the principles of
Effective Altruism and more the overlap between people calling themselves
Effective Altruists and other personal and career preoccupations. That's how
you get four of the six "most impactful" questions in politics and
international relations being about AI and one of the others involving
representation for 'sentient nonhumans'. (Suffice to say this is not a list a
political scientist, or even your average consequentialist utilitarian
cosmopolitan humanist who read more newspapers than AI papers would propose as
research priorities in politics and IR. Probably great research questions to
get you a popular blog and a job in a Silicon Valley research institute
though)

~~~
jessriedel
Although all organizations are influenced by their particular constituency, the guys at 80k have been led to these causes by particular philosophical and empirical arguments, which they explain on the website _at great length_, not (say) because they all used to be ML researchers. (They do not have technical backgrounds.) You can certainly disagree with the arguments, and they would be the first to agree that their conclusions (although not premises) are radical.

But more importantly, I think if you just went and talked to random political
scientists, you would find they actually had not thought very hard about what
the most important causes are. If you ask them, they would come up with
something on the spot. Indeed, political scientists don't at all seem to be
the correct experts for this question.

~~~
notahacker
I'm going to go out on a limb and say that political scientists who spend their entire working life studying political science and teaching students the most important questions in political science (including ones that aren't their specialism!) _might_ have put more thought into which political science questions have impact and which aren't studied enough than was required to pick three questions from Allan Dafoe's paper on AI, one from a CNAS paper on AI, and one from a list already on their website, before moving on to the next field in the alphabet. The list covers 19 academic disciplines and a third of their sources are papers mainly about AI!

To be fair to the authors, they openly state right at the beginning of the
article that the scope of the questions they're listing is based entirely on
questions being asked _within_ their community. I'm not saying the questions
aren't worth answering, but a community with less overlap with LessWrong's
'politics is the mind killer' singularity-believers would perhaps choose a set
of 'important questions in politics' somewhat less narrowly focused on AI. And
this community overlap clearly has more of an impact on the authors'
conclusion that the most impactful questions in politics nearly all concern AI
than pure philosophical commitment to maximising the utilitarian value of
their time.

~~~
jessriedel
> I'm going to go out on a limb and say that political scientists that spend
> their entire working life studying political science and teaching students
> the most important questions in political science

Political scientists will know the most important questions in political science as judged by _intellectual interest_, but that's just not the question being asked here. They are asking which questions in political science will have the biggest impact on the world as judged by a utilitarian and long-termist framework, and I don't see why political scientists would have a confident answer to that.

Likewise, I am a physicist with expertise in what questions are of
intellectual interest to physicists, but I don't think that I or my colleagues
have good ready answers to which physics questions will have the biggest
impact on the world.

> And this community overlap clearly has more of an impact on ...

But the community overlap wasn't a random event that is just now influencing
this investigation. The members of the community were attracted to each other
_because_ they were convinced by certain abstract arguments. (LessWrong is
concentrated in Berkeley while 80k Hours is UK based and the members mostly
hail from there and Australia. They found each other through the internet and
through the Oxford philosophy department.) You can certainly disagree with the
arguments, but chalking this up to having overlap with some dorky community is
a cheap ad hominem.

~~~
dragonwriter
> They are asking which questions in political science will have the biggest
> impact on the world as judged by a utilitarian and long-termist framework,
> and I don't see why political scientists would have a confident answer to
> that.

> Likewise, I am a physicist with expertise in what questions are of
> intellectual interest to physicists, but I don't think that I or my
> colleagues have good ready answers to which physics questions will have the
> biggest impact on the world.

While answering the question involves predicting the future in ways no one
should be _overly_ confident in, it's worth noting that social impact
questions of that type are, in fact, within the domain of political science in
a way they are not within the domain of physics, so practitioners within the
two fields aren't exactly similarly situated with regard to the question.

~~~
jessriedel
Sort of. Some political scientists are certainly more likely to estimate the impact of their policy suggestions than physicists, mostly because physicists rarely make policy suggestions. But I don't think they try to survey all political science questions and systematically compare them along some measure of impact. I expect the hypothetical disagreement between Dafoe and a random political scientist comes down to a disagreement outside their expertise (the importance and long-term impact of AI in general). Likewise, physicists couldn't tell you much about the impact of their work because it hinges on things (e.g., economics) outside their expertise.

~~~
adamsea
And is everything you are saying just sort of, your own take on things?

Or do you have some sort of special expertise?

Because sure, what you're saying _sounds_ good, but it's far from a piece of ironclad logic, so color me unconvinced : ).

------
friendlybus
The list seems pretty off. It includes the "creatine makes vegetarians smarter" question under cog sci and psychology, but the article that question links to kills the idea in its headline.

Sentience of animals and non-humans comes up a lot, which looks less important than a lot of other research areas: fusion, physics, etc.

------
ryanmarsh
Biology and genetics: only one question focuses on genetic engineering, and it's about "crops that could thrive in the tropics during a nuclear winter scenario". This seems to reflect someone's biases more than anything.

------
Gravityloss
How about chemistry and materials science?

Think how 40 years ago lithium batteries didn't exist, and they only started
becoming commercially available in this millennium.

Are equivalent improvements happening soon?

~~~
fuzzfactor
>How about chemistry and materials science?

There are more problems than there are scientists.

And more scientists than there are labs that can solve problems.

------
seppel
There are a lot of questions where I would have trouble assessing whether an answer is a correct/valid one. I'd even argue that different people would accept different answers as answers, which in turn makes them bad questions to begin with.

------
SolarNet
It's frustrating that "Computer Science" is prefixed with "Machine Learning, Artificial Intelligence, and", ignoring the many notable research questions that have nothing to do with intelligence. Ditto the "Statistics and" prefix to mathematics. Both ignore the 90% of the field that isn't the prefix.

~~~
TheRealNGenius
This is just nitpicking

------
contingencies
I would say that the absence of any overarching anthropological,
epistemological or philosophical basis for these questions is the fundamental
observation that needs to be made. "Big social impact" is hardly a defined
category, let alone necessarily representative of desirable outcome.

As we move from a multiplicity of disparate cultures to a single common
denominating quagmire of internet-unified global capitalism, how do we ensure
that alternative perspectives on nature, society and purpose continue to exist
and are granted reasonable space and resources to self-sustain?

Alternatively stated: how can _homo sapiens_ as a group reliably value things such as the commons (nature, intellectual heritage, freedom of choice, individual dignity, etc.) that exist outside of conventional private ownership and economic rationalism, without politically centralizing objective value and thereby forfeiting the great strengths known to be associated with heterogeneity?

------
Balgair
In the Bio and Genetics section there are some good ones and some answers
already (kinda). I'll try to be brief:

> What’s the minimum viable human population (from the perspective of genetic
> diversity)?

~500 people. It's not a 'firm' answer, as the research is still ongoing (a.k.a. it's complicated), but it looks like the number is ~500, likely a bit more:
[https://en.wikipedia.org/wiki/Minimum_viable_population](https://en.wikipedia.org/wiki/Minimum_viable_population)

> What future possibilities are there for brain-computer interfacing and how
> does this interact with issues in AI safety?

It all comes down to scarring. Current attempts typically end in scarring, at least in the central nervous system. We've not got a good way around it yet, at least for the little tetrode-like probes you have to insert.

The most exciting (to me) is to use Clarity and Optogenetics. Clarity to make
the brain, well, clear-er. And Optogenetics to stimulate cells with light.
Shine light, nerves fire.

Trouble is, Clarity makes the brain swell ~3x, and nerves are mostly smaller than the diffraction limit of light. Maybe attach some iron atoms in the optogenetic pore and then pull on them in a magnetic gradient, thereby separating out the frequencies needed. Still, it's a _long_ way off.

> What do we know from animals about how cognitive abilities scale with brain
> size, learning time, environmental complexity, etc.?

There's been some good work out of the Hercule lab (U. Chile, I think?) about this. The trouble is in getting a reliable brain marker for 'learning ability'. You can use markers for the number of layers of the cortex, and their cross-connectivity, but it's still tough to understand. Turns out, the 'wrinklage' of a mammalian brain is correlated with increasing 'learning-ness', but atmospheric/hydrodynamic pressure plays a huge role in the 'wrinklage'. Sorry, my google-fu is not good today and I don't have the paper citation.

> Why has Mohism almost died out in China, relative to other schools of
> thought?

Damn! Now that is a good question. For those just seeing Mozi for the first time, he was a defensive siege engineer during the early Warring States period (~470 BC). The guy's ethos would really fit well in Silicon Valley; he was a kind of engineery hippy dude, kinda Kevin Kelly-esque:
[https://en.wikipedia.org/wiki/Mozi](https://en.wikipedia.org/wiki/Mozi)

------
adamsea
Does anyone know if these folks got help from various experts in each of these
fields in order to assemble the list?

------
noetic_techy
Some that caught my eye:

"Improve our modelling of impact winter scenarios, especially for 1–10 km
asteroids. Work with experts in climate modelling and nuclear winter modelling
to see what modern models say. "

I think impact winter scenarios are a definite concern, but nuclear winter is actually an extremely questionable "theory". Many people are not aware, but the Russian KGB boasted that they literally made it up in order to turn public opinion in the West against nuclear weapons as NATO rolled out medium-range missiles in Europe back in 1982:
[http://www.rationaloptimist.com/blog/nuclear-winter/](http://www.rationaloptimist.com/blog/nuclear-winter/) The mushroom clouds from nuclear detonations do indeed dissipate over a few days. I believe subsequent modeling has come up empty even in the worst-case scenarios.

"Develop more reliable and tamper-proof measures for so-called ‘dark tetrad’
traits — psychopathy, Machiavellianism, sadism, and narcissism."

I actually like this question and have been thinking the same thing recently. These traits are very dangerous, especially when exhibited by those at the top of the food chain in both politics and business, and need to be identified early and treated, not allowed to flourish among those in power. (Some good reads on this are The Sociopath Next Door and The Psychopath Test.) Bad actors have incredibly outsized influence globally.

"Why have certain aspects of Chinese civilization been so long-lasting? Are
there any lessons we can draw from this about what makes for highly resilient
institutions, cultures, or schools of thought?"

I'd be curious to know what they are referring to here in modern China. Is this question even relevant post-Cultural Revolution? I think it's a more relevant question for Taiwan, which preserved the "old" culture as best it could.

~~~
jl2718
The personal benefits of power may not be great enough for anybody but full
dark triads to pursue competitively. You can see this easily in first-line
managers. And of course we are programmed to filter for this, so imagine how
deep the deception must go for politicians.

~~~
sudosysgen
Maybe a system where authority is a duty and is not competed for may be a
solution?

~~~
nerdponx
I imagine that some people would still seek positions of authority for its own
sake. Not to mention actual sociopaths who would make use of it the same way
that they do now.

------
BMSmnqXAE4yfe1
What if every scientist stopped what they are currently doing and started working on general AI? After they complete that task, all the other problems on the list will take 0.1 sec to solve.

------
Kednicma
We've got eugenics, corporatism, sci-fi bait, and worse. I do like the idea of
a field of "China studies" through the very specific lens that they've
sketched, which focuses on the experience of Hong Kong throughout the past
century, but otherwise, ugh, what a series of shallow tropes. Their "climate
studies" list is papering over the fact that those studies were done decades
ago and the conclusions were well-understood. Here's _one_ question in each
field which is better than any of theirs:

Genetics: To what extent does the RNA world (leadzyme, ribozyme, RNA bases)
influence our DNA's actions; is there more beyond genome and epigenome?

Business Development: Enumerate all models of cooperation.

Climatology: Classify the climates of Venus and Mars under anthropogenic
climate change; what will terraforming do to their climates?

Earth Science: Classify the minerals and rocks of Venus and Mars.

Neuroscience: Is demyelination preventable or treatable?

Economics: Are markets efficient? Prove that either P<NP (no) or FP=PPAD
(yes).

Medicine: Can phage therapy prevent an antibiotic-resistant bacterial doom
scenario?

History: Just let historians critique history books for a few hours live on stage, and broadcast the results. This will be more socially impactful than any specific question or research agenda.

Law: I do have a soft spot for "What rights might digital minds have under
current law?" It's a good question. However, a better one, in terms of social
impact today, might be: How can the law establish effective oversight over its
executors and legislators?

Statistics: Are there tighter bounds on neural net performance than PAC/VC
theory? Alternatively, are neural nets essentially polynomial regression?

Philosophy: The Hard Problem.

Physics: Scale quantum computing to today's classical computational regimes.

Astronomy: Finish building planned telescopes; do the planned experiments have
results which agree with current theories?

Political Science: How can two-party democracies be converted by the will of
the people to multi-party democracies; can Duverger's Law be bent?

Psychology: The Hard Problem.

Philosophy of Science: The Demarcation Problem.

Sociology: How do societies collapse; how are technologies lost? Case studies
won't cut it; we need models.

Mathematics: The Riemann Hypothesis. If metamathematics is allowed, then whether P<NP, P=NP, etc.

Research on these lines of thought _does_ have a big social impact today just
by consideration and exploration, and _will_ have big social impacts if they
ever actually answer the questions.

~~~
pkphilip
Wonder why this post is being downvoted

~~~
SilasX
I'll answer: first paragraph was trollish and could have been left off. I'd
flag but for the later stuff being more substantive.

~~~
nerdponx
Trollish? Hardly. It's a strongly-worded criticism of the article.

~~~
SilasX
Yes, and when that harsh wording provokes unproductive dialogue, it is known as being "trollish". If you really don't see what's trollish:

\- inflammatory, dismissive labels without concrete justification: "eugenics, corporatism, sci-fi bait"

\- ditto for "what a series of shallow tropes"

\- presumptuous assertion: "Here's one question in each field which is better
than any of theirs"

The HN guidelines strongly advise against doing that.

~~~
Kednicma
This is fair. I started my comment by responding to each section, but I
quickly realized that I wouldn't be able to make a constructive contribution.

First, the most important thing: note that one of the responses to my original comment expresses intense emotional relief at having some uncanny unease put directly into words. This is not the first time I've responded to this sort of article, and not the first time I've seen this sort of reply. I gather that it is _difficult_ for folks to articulate the ambient horror of what this sort of writing is sketching.

Eugenics is recognizable from phrenology. They want to know why brain size
matters and how to measure intelligence in different parts of the brain. They
also want to know whether FAAH-OUT can make people happier and less sensitive
to pain, and whether "extreme" human life extension is possible. They want to
know how small the human race can get, and what the marginal utility of each
additional person is. Indeed they ask whether utilitarianism isn't the
ultimate ethical method.

Corporatism is most obvious in the framing. Various already-established
consensus positions are turned into but-really or what-about questions, so
that it seems as if climate change is not something that definitely exists,
that we definitely could do something about, and that we definitely are not
doing enough about. Corporations are portrayed as the leaders in innovation,
while governments and the public are incompetent and slow to change.

Why do I say that these positions are tropes? Because I've read enough Dark
Enlightenment literature to recognize its fingerprints, mostly. The authors
are very slanted towards a meritocratic transhumanist utopia, where their very
intelligent observations about the coming utopia will be rewarded with high
stations and praise in the coming utopia. Meanwhile all of the implications
about classism and injustice are carefully worded away so as not to be blatant
and repulsive.

I recognize that you might not like my questions. But you also admit that
they're substantive, and that's good enough for me. Honestly, they're barely
my questions at all; each of them has been open for decades, I think. Maybe
quantum computing is the most recent one to have something interesting happen,
and that's changed our trajectory from asking "can we?" to "can we, bigger?"
and "can we, cheaper?" and "can we, easier?"

~~~
SilasX
Thank you for clarifying. I still think those aren't good reasons to apply the labels. But at least now you've spelled out your justifications for those strong claims, which is more productive than the first paragraph of your original comment, since it allows others to respond. So I do not consider it trollish as phrased in your most recent comment.

Also,

>I recognize that you might not like my questions.

My criticism wasn't about any of those.

