
Redesigning the Scientific Paper - jsomers
https://www.theatlantic.com/science/archive/2018/04/the-scientific-paper-is-obsolete/556676/?single_page=true
======
jfaucett
> These programs tend to be both so sloppily written and so central to the
> results that it’s contributed to a replication crisis, or put another way, a
> failure of the paper to perform its most basic task: to report what you’ve
> actually discovered, clearly enough that someone else can discover it for
> themselves.

This is the crux of the problem IMHO - at least for the fields I study
(AI/ML). Replicating the results in papers I read is way harder than it needs
to be. For these fields it should be as simple as firing up a Jupyter notebook
and downloading the actual dataset the authors used (which is much harder to
get your hands on than it seems). Very few papers actually contain links to
all of this in a final polished manner so that it's #1 understandable and #2
repeatable.
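For what it's worth, the "repeatable" half doesn't need much machinery: the paper links a dataset URL plus a published checksum, and anyone can verify they are running on exactly the data the authors used. A minimal sketch (the URL and checksum here are placeholders, not from any real paper):

```python
import hashlib
import urllib.request

def sha256_of(path):
    """Hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def fetch_and_verify(url, expected_sha256, dest="dataset.csv"):
    """Download a dataset and check it against the checksum
    published in the paper, failing loudly on any mismatch."""
    urllib.request.urlretrieve(url, dest)
    digest = sha256_of(dest)
    if digest != expected_sha256:
        raise ValueError(f"checksum mismatch: got {digest}")
    return dest
```

A paper's repo could then reduce "get the data" to one documented function call.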

Honestly, I'd much rather have your actual code and data that you used to get
your results than read through the research paper if I had to choose (assuming
the paper is not pure theory) - but instead there is a disproportionate focus
on paper quality over "project quality" at least IMHO.

I don't really know what the solution is since apparently most academics have
been perfectly fine with the status quo. I feel like we could build a much
better system if we redefined our goals, since I don't think the current
system is optimal for disseminating knowledge or finding and fixing mistakes
in research or even generally working in a fast iterative process.

~~~
bjourne
I've had a paper peer reviewed. It was ultimately rejected but I can't help
but suspect that by making all my code publicly available, I hurt my chances
of publication. The reviewers' comments were about my coding style, my choice
of build tool (I didn't use make, but something else which is just as easy to
use), the choice of C vs C++...

It's like best practices for computer security -- always strive to minimize
the attack surface. :) Without source code there is much less stuff to
criticize!

~~~
mschuetz
> It's like best practices for computer security -- always strive to minimize
> the attack surface.

I suspect that's also why some papers are unnecessarily verbose and describe
simple things in the most complicated way possible. Can't criticize something that can't
be understood.

~~~
iamdave
Then why submit it for _peer review_ at all?

~~~
sideshowb
Because we need peer reviewed papers on our CVs!

I also detest simple things made complex, though. In my experience (which has
covered electronics, epidemiology and geography) reviewers tend to pick up on
obtuse issues in the text but miss glaring errors in the math. It's sad, and
you can see why someone less than scrupulous would exploit that tendency by
over-complicating things. That said, I think plenty of authors are honest but
just not very clear thinkers!

------
JorgeGT
> _How to integrate billions of base pairs of genomic data, and 10 times that
> amount of proteomic data, and historical patient data, and the results of
> pharmacological screens into a coherent account of how somebody got sick and
> what to do to make them better? How to make actionable an endless stream of
> new temperature and precipitation data, and oceanographic and volcanic and
> seismic data? How to build, and make sense of, a neuron-by-neuron map of a
> thinking brain? Equipping scientists with computational notebooks, or some
> evolved form of them, might bring their minds to a level with problems now
> out of reach._

The article seems to conflate the praxis of science with the archival of it.
Scientists do all of the above on gigantic clusters, not on an
IPython/Mathematica notebook. The purpose of publishing papers, on the other
hand, is adding to the archival of knowledge, and they can easily be rendered
on a laptop with LaTeX.

And they are excellent at archival, by the way. You can see papers from the
19th century still being cited. On the other hand I have had issues running a
Mathematica notebook from a few releases back -- and I seriously doubt one
will be able to read any of my Mathematica notebooks 150 years from now. The
same with the nifty web-based redesign of the Nature paper that is mentioned:
I bet the original Nature article will be readable 150 years from now, whereas
I doubt the web version will last 20.

~~~
tomkinstinch
A group upstairs at the Broad Institute built out a system to use Jupyter
notebooks for analysis of genomic data, with backend computation happening on
a Spark cluster[1]. Science on large datasets can happen via interactive
notebook. In connection with a recent GWAS on a massive dataset from the UK
Biobank, the researchers involved decided not to write a traditional
scientific journal article (at least for now) since their analyses will
continue to mature. Instead, they've been posting insights online in blog
form, with associated code on GitHub[2]. It's a daring move toward publishing
at the speed of research. Once their conclusions mature, traditional journal
articles may follow to distill and preserve the key findings. In the meantime,
those in the field can apply the same code to their data, replicate the
analyses, and get an early look at the output of the research. This works
partially because the methods (univariate GWAS) are understood in the field
and the interpretation and rendering of a particular dataset is the science in
this case, rather than a new method (which would still likely warrant a
paper).

1\. [https://hail.is](https://hail.is)

2\. [http://www.nealelab.is/blog/2017/7/19/rapid-gwas-of-
thousand...](http://www.nealelab.is/blog/2017/7/19/rapid-gwas-of-thousands-of-
phenotypes-for-337000-samples-in-the-uk-biobank)

~~~
JorgeGT
> _Science on large datasets can happen via interactive notebook._

I did not claim the opposite, just that it regularly happens without
interactive notebooks. This seems like an interesting project though.
Regarding the blog posts, it seems that there's a bug that makes all the
entries appear as published on September 20, 2017?

------
bcoughlan
I used to work as a software developer for a research institute. I wanted to
open source our research code and tools, and the department head was in favour
of it because it would raise the profile of the research unit.

There were two forces working against us. First many of the grants came from
governments, and a stipulation was that we would devote some resources to
helping startups commercialise the output of the research. Some felt that open
sourcing would remove the need for the startups to work directly with them to
integrate the algorithms, and that this would hurt future grant applications
by making the research look ineffective.

The main opposition though came from PhD students and postdocs. Most didn't
want anything related to their work open sourced. They believed that it would
make it easy for others to pick up from where they were and render their next
paper unpublishable by beating them to the punch.

Sadly I think there was some truth to both claims. Papers are the currency of
academics, and all metrics for grants and careers hinge off it. It hinders
cooperation and fosters a cynical environment of trying to game the metrics to
secure a future in academics.

I don't know how else you should measure academics' performance, but until
those incentives change, the journal paper in its current form is going
nowhere.

~~~
neuromantik8086
> Papers are the currency of academics, and all metrics for grants and careers
> hinge off it. It hinders cooperation and fosters a cynical environment of
> trying to game the metrics to secure a future in academics.

I honestly can't contemplate who in their right mind would want "a future in
academics" where academia is defined as a constant stream of metric gaming
rather than actually accomplishing what you originally set out to accomplish.

~~~
TeMPOraL
Imagine it like this: maybe you joined your field (be it software or academia)
with dreams of making the world better. So it shouldn't matter to you _who_
does the problem solving, as long as the problems get solved and the world
gets better in a timely manner. But then, you've found a gig solving an
important problem, with ok-ish pay. And you've also found yourself with dreams
of a home, a spouse, maybe children. And suddenly, it starts to matter to you
whether or not you get paid, so that you can fulfill your dreams. Maybe
someone else could do your important work better than you. But then you won't
have money to support your family. So it's gaming time, and now you end up
focusing on "a future in <your field>".

------
mettamage
Regarding the article itself: Bret Victor is amazing and so is Strogatz. They
are both my heroes, actually.

But I do think there is a difference between scientific professionals
communicating amongst each other and scientific communication to the public.
And if mathematicians understood Strogatz's paper at the time it was
published, and there were enough mathematicians to disseminate the knowledge,
then should you be required to create algorithms as animations?

Part of the reason why mathematicians and computer scientists (as researchers)
conceive of new algorithms in the first place is because a lot of them are
very strong in visualizing algorithms and 'being their own computer'.

Though, if a scientist wants to appeal to a broader group of scientists, then
I'd recommend they use every educational tool possible. For example,
they could create an interactive blog post à la Parable of the Polygons[1] and
link that in their paper.

On an unrelated note, it is such a pity ncase isn't mentioned at all in this
article!

Also related is explorabl.es; not everything there is science communication in
an interactive way, but a lot of it is[2].

[1] [http://ncase.me/polygons/](http://ncase.me/polygons/) [2]
[http://explorabl.es/](http://explorabl.es/)

~~~
hnhn12
>Part of the reason why mathematicians and computer scientists (as
researchers) conceive of new algorithms in the first place is because a lot of
them are very strong in visualizing algorithms and 'being their own computer'.

Can you explain this in more detail?

------
jostmey
We need GitHub for science. But that's not enough. It needs to be combined
with a mechanism for peer-review and publishing that funding agencies will
find acceptable--that's the key.

~~~
WhompingWindows
It's not only about funding agencies, though. Researchers need to be trained
in the use of a GitHub-like system - as a former graduate student, I knew NO
ONE using GitHub. Furthermore, after working in scientific publishing where it
was MS Office or bust, I doubt you will convince Elsevier and the publishing
giants that GitHub for Science is the way to go. How do these huge companies
make money if scientists are sharing their work for free online? Meanwhile,
you have scientists themselves, many of whom are required to hit certain
numbers of articles in certain "prestige tiers" of journals in order to get
tenure. If you're an extremely busy junior faculty member who will be FIRED
when tenure review comes up if you don't have 1 paper per year, will you risk
it on a GitHub scheme? I doubt that.

These problems are not unique to science. Take any large group of humans, be
it government, the military, a company, a set of companies: they will coalesce
onto the use of a certain number of tools, and it will take a LOT of energy
and work to switch tools. Getting an entire industry to switch tools is
extremely hard, perhaps downright impossible, and I'm not sure if it can be
done in the rather slow-moving world of academic science. It simply won't be
possible to enact a huge change on the system with so much institutional
momentum built up.

It seems to me the best solution is to make incremental changes toward a
GitHub-like system, training new scientists in grad programs to have better
practices. Maybe we can't get researchers to post their full data set for all
the public, but perhaps it can be made available to non-competitors, those who
researchers aren't actively competing with for the same grants. Maybe
researchers can be required by journals to post the peer-review
comments/responses, as well as the drafts of the article, alongside the
article itself. Maybe we can have researchers posting extremely detailed
code/methodological information alongside the article, without forcing them to
give over the entire dataset to other researchers?

Finally, maybe the whole incentive structure of academic science needs to
change: maybe articles/publishing metrics should be de-valued in favor of
teaching skills, mentorship ability, and collaboration within and across
disciplines?

~~~
chongli
_I doubt you will convince Elsevier and the publishing giants_

Who says we need to convince them? How about we leave them behind? They are
rentiers, gatekeeping society's access to publicly funded scientific
knowledge. I can't think of a reason why society should allow this hostage
situation to continue.

~~~
anigbrowl
OK, but you need to figure out a way to take over the universities then
because scientists are also animals who need food and shelter etc. etc. and
depend on grants, stipends, and salaries to buy those things.

I'm not saying this to be dismissive - I'm strongly in favor of faculties
organizing to unseat administrators from their privileged positions. It's
baffling to me that bright minds on campuses complain at length about the
state of higher education but seem oddly averse to _doing_ anything about it.

------
yiyus
I do not agree that the scientific paper needs to be replaced. It should be
complemented with the help of new tools, that is a very good thing, but I
still want the article.

I work everyday with papers from decades ago, and I hope people will work with
my papers in the future. How can I guarantee that researchers of 2050 will be
able to run my Jupyter notebooks?

Moreover, it is not uncommon to not be able to publish source code. I can
write about models and algorithms, but I am not allowed to publish the code I
write for some projects.

------
aplorbust
"... the skill most in demand among physicists, _biologists_ , chemists,
geologists, even anthropologists and research psychologists, is facility with
programming languages and "data science" packages."

If I wanted to prove to someone this statement was true, what would be the
most effective way to do that?

Is the author basing this conclusion on job postings somewhere?

Has he interviewed anyone working in these fields?

Has he worked in a lab or for a company doing R&D?

How does he know?

What evidence (cf. media hype) could I cite in order to convince someone he is
right?

When I look at the other articles he has written, they seem focused on
popularised notions about computers, but I do not see any articles about the
academic disciplines he mentions.

------
cowpig
GitXiv is very much worth taking a look at if you're into this kind of thing:
[http://www.gitxiv.com/page/about](http://www.gitxiv.com/page/about)

edit: as is Chris Olah's Distill project:
[https://distill.pub/](https://distill.pub/)

------
Myrmornis
How well does it work to version control Mathematica notebooks in git? For
example, is it possible to get meaningful textual diffs when comparing two
versions of a Mathematica notebook, and can git compress them enough to keep
repo size down?

With IPython this is also an issue -- tracking code in JSON is much less clean
than tracking code in text files.

It's interesting that Mathematica and IPython both left code-as-plain-text
behind as a storage format. I wonder if it would have been possible to come up
with a hybrid solution, i.e. retain plain-text code files but with a
serialized data structure (JSON-like, or binary) as the glue.
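One common workaround for the noisy-JSON-diff problem (it's what tools like nbstripout automate) is to strip the volatile fields -- outputs and execution counts -- before committing, so diffs reduce to actual code and markdown changes. A minimal sketch:

```python
import json

def strip_outputs(path):
    """Drop outputs and execution counts from a .ipynb file in place,
    so git diffs show only changes to code and markdown cells."""
    with open(path) as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    with open(path, "w") as f:
        json.dump(nb, f, indent=1, sort_keys=True)
```

Hooked up as a git filter, this keeps the repo small and the diffs readable, at the cost of losing the rendered results in version control.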

~~~
henrikeh
I use Mathematica daily and frequently store large-ish notebooks in Git. The
format is textual, but the diffs are filled with a lot of noise.

------
hprotagonist
as a practical matter, papers will remain relevant as long as they are the
metric by which grant applications and tenure decisions are made.

as a philosophical matter, for computation heavy fields, i would love to see
literate programming tools become _de rigueur_ in the peer-reviewed
distribution of results. In some fields (AI) this basically happens already —
the blog post with code snippets and a link to arxiv at the end is a pretty
common thing now.

~~~
goerz
Papers (and PDFs) are relevant because they are easy to organize and archive,
essentially in perpetuity. Source code is too, so there's nothing wrong with a
"GitHub for Science". Notebooks, blogs, or interactive dashboards, on the other hand,
are an amazing tool both for research and for communication, but they are far
more ephemeral than a paper. They need a large overhead to keep them running
that cannot be sustained over decades or centuries. Typically, you'll have
lots of trouble re-running a 5 year old notebook. That's not to say they're
useless, e.g. as supplementary material (quite the opposite). They're just not
going to replace papers anytime soon.

~~~
hprotagonist
Those are good points. Journals have to care about this, too, these days --
supplementary information now routinely includes videos (hope you have the
right codecs!), word documents, audio, and full or redacted data sets.

I think what appeals to me about literate programming style is that it
encourages a return to a more clear and expository style of writing, which has
been squeezed out of scientific writing in journals over the years. I don't
care what instantiation is required to produce a more uniformly clear and
cogent document, I just care that it happens.

------
kemiller
As is usual for Stephen Wolfram, he has a point, and then blunts it by trying
to own the whole thing. Edit: to expand, part of the answer to his question,
why don't more people do this, is that it requires his expensive proprietary
software. Scientific papers are (nominally at least) a commons.

------
vanderZwan
Aside from my other comment, I think that any discussion about the scientific
paper and the way knowledge is communicated is incomplete without a mention of
Nick Sousanis' _Unflattening_. It is a thesis for a Doctor of Education degree
about this very topic, that practises what it preaches by being written as a
comic book.

[http://www.hup.harvard.edu/catalog.php?isbn=9780674744431](http://www.hup.harvard.edu/catalog.php?isbn=9780674744431)

------
sophacles
This is a really interesting article. The use of Jupyter as a publication
mechanism is a really neat idea! I think this path will be fruitful, and I am
all for it. I do think however that some low-hanging fruit should be addressed
in parallel - stuff that makes looking through the existing work a total pain:

* Date of publication and dates of research should be required in every paper. It's really difficult to trace out the research path if you start from Google or random papers you find in various archive searches. Yes, that info can be present, but often it's in the metadata where the PDF is linked rather than in the PDF itself. Even worse is getting "pubname, vol, issue" info rather than a year... now I have to track down when the publication started publishing, how they mark off volumes and so on. I just want to know when the thing was published.

* Software versions used - if you are telling me about kernel modules or plugins/interfaces to existing software, I need to know the version to make my stuff work. Again - eventually it can be tracked down, but running a 'git bisect' on some source tree to find out when the code listings will compile is not OK.

* Actual permalinks to data, code, and other supplemental information. Some 3rd-party escrow service is not a terrible idea, even. I hate trying to track down something from a paper only to find the link is dead and the info is no longer available or has moved a several-hour Google journey away.
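The software-versions point in particular is cheap to fix at the source: a few lines in the analysis script can emit an exact version manifest for the methods section or supplementary material. A sketch (the package list is whatever the analysis actually imports):

```python
import platform
from importlib.metadata import version, PackageNotFoundError

def environment_manifest(packages):
    """Collect exact versions of Python and the given packages,
    suitable for pasting into a methods section or supplement."""
    manifest = {
        "python": platform.python_version(),
        "platform": platform.platform(),
    }
    for name in packages:
        try:
            manifest[name] = version(name)
        except PackageNotFoundError:
            manifest[name] = "not installed"
    return manifest

# One line per dependency, ready to archive alongside the paper:
for key, val in environment_manifest(["numpy", "scipy"]).items():
    print(f"{key}: {val}")
```

This doesn't pin the whole environment the way a lockfile or container would, but it's a strict improvement over a bare "we used Python".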

------
ivotron
shameless plug for the Popper Convention and CLI tool
[http://github.com/systemslab/popper](http://github.com/systemslab/popper) .
Our goal is to make writing papers as close as possible to writing software
(DevOpsify academia) but in a domain-agnostic way.

------
mettamage
I love science, but I have a lot of issues with it lately. I'm going to
express some of them since they're related to this topic.

The basic function of a scientific paper is understanding and reproducibility
(inspired by jfaucett's comment).

I wonder, is reproducibility necessary? Is it even possible when things get
really complex? Isn't consensus enough? I feel in the field of psychology (and
most social sciences) that is what happens. I suppose consensus can be easily
gamed by publication bias and a whole slew of other things. So I suppose as
jfaucett puts it, a "discover for yourself" type of thing should still be
there. I wonder how qualitative research could be saved and if you could call
it science. In Dutch it is all called "wetenschap" and "weten" means to know.

But how should we go about design then? HCI papers use _a lot of design_
that is never justified. The paper is like: we built a system, it
improved our user metrics. But is there any intuition or theory written down
as to _why_ they designed something a certain way? Not really.

I suppose one strong way to get reproducibility is by getting all the inputs
needed. In a psychology study this means getting a dataset. Correlations are
fuzzy but if I get the same answers out of the same dataset, then the claims
must be true for that particular dataset.
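That weak form of reproducibility -- same dataset in, same numbers out -- is at least mechanically checkable, since a deterministic analysis of a fixed dataset is bit-for-bit repeatable. A toy stdlib-only sketch (the data is made up):

```python
import statistics

# Toy stand-in for "the same dataset".
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

# Pearson correlation computed explicitly: identical inputs always
# yield the identical statistic, which is the "same answers out of
# the same dataset" criterion described above.
mx, my = statistics.mean(x), statistics.mean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / (sum((a - mx) ** 2 for a in x) ** 0.5
           * sum((b - my) ** 2 for b in y) ** 0.5)
```

The hard part, of course, is analyses with hidden nondeterminism (random seeds, library versions, hardware), which is exactly what never makes it into the paper.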

Regarding design and qualitative studies, maybe, film _everything_? The
general themes that everybody would agree upon watching _everything_ would be
the reproducible part of it?

Ok, I'll stop. The whole idea that a paper needs to satisfy the criterion
of reproducibility confuses me when I look at what science is nowadays.

~~~
Ace17
> I wonder, is reproducibility necessary? Is it even possible when things get
> really complex? Isn't consensus enough?

If my results are not reproducible, then I'm basically asking for your trust.
So now, instead of actually doing the experiment, there's an incentive for me
to forge my results. And they don't have to match reality anymore, by the way.

Gotta go; back to working on my paper about psychic powers.

------
laderach
Sometimes you don't need to redo everything from scratch to change things.

There are a number of problems in scientific publishing. Two big ones are:

1) Distribution hurdles and paywalls imposed by rent-seeking journals - who
knows how much this has prevented innovation and scientific advancement in the
last 20 years

2) Easily replicating experiments / easily verifying accuracy and significance
of results - this is related to for instance making data used in research more
easily accessible and making it easier to spot p-value hacking

Fixing these might not require a completely new format for papers. Or it
could. I can envision solutions both ways.

I really like what the folks from Fermat's Library have been doing. They have
been developing tools that are actually useful at the present time and push us
in the right direction. I use their arXiv chrome extension
[https://fermatslibrary.com/librarian](https://fermatslibrary.com/librarian)
all the time for extracting references and bibtex. At the same time they are
playing with entirely new concepts - they just posted a neat article on medium
about a new unit for academic publishing
[https://medium.com/@fermatslibrary/a-new-unit-of-academic-
pu...](https://medium.com/@fermatslibrary/a-new-unit-of-academic-publication-
on-twitter-cdda2479091e)

------
omot
> His secret weapon was his embrace of the computer at a time when most
> serious scientists thought computational work was beneath them.

They still think this.

------
vanderZwan
Interesting timing: for the last two years I have worked for a research group
headed by Sten Linnarsson at the Karolinska Institute[0]. I was specifically
hired to build a data browser for a new file format for storing the ever-
growing datasets[1][2][3]. The viewer is an SPA specialised in exploring the
data on the fly, doing as much as possible client side while minimising the
amount of data being transferred, and staying as data-agnostic as possible.

Linnarsson's group just pre-published a paper cataloguing all cell types in
the mouse brain, classifying them based on gene expression[4][5]. The whole
reason that I was hired was as an "experiment" to see if there was a way to
make the enormous amount of data behind it more accessible for quick
explorations than raw dumps of data. The viewer uses a lot of recent (as well
as slightly-less-recent-but-underused) browser technologies.

Instead of downloading the full data set (which is typically around 28k genes
by N cells, where N is in the tens to hundreds of thousands), only the general
metadata plus requested genes are downloaded in the form of compressed JSON
arrays containing raw numbers or strings. The viewer converts them to Typed
Arrays (yes, even with string arrays) and then renders nearly everything on
the fly client-side. This also makes it possible to interactively tweak view
settings[6]. Because the viewer makes almost no assumptions about what the data
represents, we recently re-used the scatterplot view to display individual
cells in a tissue section[7].

Furthermore, this data is stored offline through IndexedDB, so repeat
viewings of the same dataset or of specific genes within it do not require re-
downloading the (meta)data. This minimises data transfer even further, and
makes the whole thing a lot snappier (not to mention cheaper to host, which
may matter if you're a small research group). The only reason it isn't
completely offline-first is that using service workers is giving me weird
interactions with react-router. Being the lone developer I have to prioritise
other, more pressing bugs.

In the end however, the viewer is merely a complement to the full catalogue,
which is set up with a DokuWiki[8]. No flashy bells and whistles there, but it
_works_. For example, one can look up specific marker genes. It just uses a
plugin to create a sortable table, which is established, stable technology
that pretty much comes with the DokuWiki[9][10]. The taxonomy tree is a simple
static SVG above it. Since the expression data is already known client-side
(it is used to generate the table dynamically), we only need a tiny bit of
JavaScript to turn it into an expression heatmap underneath the taxonomy tree. Simple and very
effective, and it probably even works in IE8, if not further back. Meanwhile,
I got myself into an incredibly complicated mess writing a scatterplotter with
low-level sprite rendering and blitting and hand-crafted memoisation to
minimise redraws[11].

Personally, I think there isn't enough praise for the pragmatic DokuWiki
approach. My contract ends next week. I intend to keep contributing to the
viewer, working out the (way too many) rough edges and small bugs that remain,
but it won't be full-time. I hope someone will be able to maintain and develop
this further. I think the DokuWiki has a better chance of still being on-line
and working ten years from now.

[0] [http://linnarssonlab.org/](http://linnarssonlab.org/)

[1] [http://loompy.org/](http://loompy.org/)

[2] [https://github.com/linnarsson-lab/loom-
viewer](https://github.com/linnarsson-lab/loom-viewer)

[3] [http://loom.linnarssonlab.org/](http://loom.linnarssonlab.org/)

[4]
[https://twitter.com/slinnarsson/status/981919808726892545](https://twitter.com/slinnarsson/status/981919808726892545)

[5]
[https://www.biorxiv.org/content/early/2018/04/05/294918](https://www.biorxiv.org/content/early/2018/04/05/294918)

[6] [https://imgur.com/f6GpMZ1](https://imgur.com/f6GpMZ1)

[7]
[http://loom.linnarssonlab.org/dataset/cells/osmFISH/osmFISH_...](http://loom.linnarssonlab.org/dataset/cells/osmFISH/osmFISH_SScortex_mouse_all_cells.loom/NrBEoXQGmAGHgEYq2kqi3IExagZjwBYI0R58oA7AVwBs6UMnt5FZ5gB2J1GADl6kYFavUZsWbDjACcQmOyFp42AKy9mq6Z3nw~SfSmFwCepsm0YZwcwaX7S0ZCTTgyAWlgA6Dq0REiABssIhqQeoo3viyRLJq_HEhsvhcatCURFDYTLIqCJZ4mIrZeIRkpvC0DBZS1pxGBo3CzRYmlNWMlEhZiOaIyPhsOZHYOEHQbKRAA),
[https://i.imgur.com/a7Mjyuu.png](https://i.imgur.com/a7Mjyuu.png)

[8]
[http://mousebrain.org/doku.php?id=start](http://mousebrain.org/doku.php?id=start)

[9]
[http://mousebrain.org/doku.php?id=genes:aw551984](http://mousebrain.org/doku.php?id=genes:aw551984)

[10]
[http://mousebrain.org/doku.php?id=genes:actb](http://mousebrain.org/doku.php?id=genes:actb)

[11] [https://github.com/linnarsson-lab/loom-
viewer/blob/master/cl...](https://github.com/linnarsson-lab/loom-
viewer/blob/master/client/plotters/scatterplot.js)

------
tensor
This title is horrible hyperbole. Science is more than just machine learning.
Hell, even if we just constrain ourselves to "computer science" probably half
of it is just math, for which the scientific paper is definitely not
"obsolete" nor even deficient in any way.

But outside of computer science you need laboratories to replicate
experiments. Scientific papers are perfectly fine vehicles to record the
necessary information to replicate experiments in this setting. Historically
appendices are used for the extended details. And yes, replication is hard,
but it's part of science.

~~~
jacobolus
What makes you think math papers wouldn’t benefit from having interactive bits
in the middle? It obviously isn't relevant for all math papers, but often would
be extremely helpful. I read a lot of technical papers, and it is quite
frequent that I will need to spend an hour or two decoding some formal
mathematical statements whose basic idea/intuition could be more clearly
conveyed pictorially in a few minutes.

Of course, making interactive diagrams often takes dramatically more work than
sketching pictures with a pen (or just writing down equations), and
mathematicians are not typically trained to do it, so it would be an uphill
slog for many. But I would love it if there was more funding/prestige/etc.
available for mathematicians to make their papers more accessible by adding
better visuals.

------
awll
This reads like an ad for Mathematica.

~~~
dang
Swipes like this break the site guidelines, including these important ones:

"Please respond to the strongest plausible interpretation of what someone
says, not a weaker one that's easier to criticize. Assume good faith."

"Please don't post shallow dismissals, especially of other people's work. A
good critical comment teaches us something."

Would you please (re-)read
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)
and not post like this here?

------
sappapp
Average clickbait

~~~
dang
Maybe look a bit closer? I slog through "average clickbait" for hours a day in
the hope of sparing this community from it, and I can assure you that this is
far from the case.

