
Is There a Limit to Scientific Understanding? - Hooke
https://www.theatlantic.com/science/archive/2017/12/limits-of-science/547649/?single_page=true
======
Animats
This is mostly a problem in biology. Biology doesn't have parsimony. There's
no evolutionary drive towards simplicity. Biology is a collection of evolved
patches. It can have much higher complexity than it would seem to need.

Yet human DNA is only about 4GB. That's big, but not so big as to be beyond
analysis. Without computer assistance, getting a handle on it would be
hopeless, but we're past that point and making steady progress.
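As a rough sanity check on that figure (a back-of-the-envelope sketch; ~3.1 billion base pairs is the usual estimate, and the byte count depends entirely on the encoding):

```python
# Back-of-the-envelope size of the human genome.
BASE_PAIRS = 3.1e9  # rough estimate for the haploid human genome

packed_gb = BASE_PAIRS * 2 / 8 / 1e9  # A/C/G/T fits in 2 bits per base
ascii_gb = BASE_PAIRS / 1e9           # one text character per base

print(f"2-bit packed: {packed_gb:.2f} GB")  # ~0.78 GB
print(f"ASCII text:   {ascii_gb:.2f} GB")   # ~3.10 GB
```

Either way it's a dataset that fits comfortably on a laptop, which is the point about it not being beyond analysis.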

(How complicated is physics? Things looked good in the era of Maxwell's
equations, but it's become much messier since. In physics, much is out of
reach of experiment, being either too small (superstrings, if they exist) or
too big (where's the missing mass, and the antimatter?). That's not an
understanding problem; that's a lack of experimental evidence.)

~~~
whatshisface
> _There's no evolutionary drive towards simplicity._

There's a drive towards smallness (as in, lower food requirements) which has
probably made simplicity a lot less bad than otherwise.

> In physics, much is out of reach of experiment due to being either too small
> (superstrings, if they exist)

In the era of Maxwell's equations, _atoms_ were too small. Never
underestimate the power of cleverness and indirect observation! (Nowadays we
can image atoms directly, but we needed to get our understanding of them
elsewhere in order for our technology to advance that far.) The jury's still
out on strings; although plenty of people claim that we won't ever reach the
energy scales necessary to _totally_ rule it out, we still might find
something predicted by a lower-energy version, and maybe one day we'll uncover
a new theoretical result that gives us an avenue we can't currently see.

> _or too big (where's the missing mass and the antimatter?)_

That's not too big; we're expecting the answer in the form of slight
asymmetries in particle interactions like the ones we're studying at the LHC.

~~~
Godel_unicode
> There's a drive towards smallness...

Which dynamically competes with a drive towards largeness (eating bigger
prey/resisting bigger predators).

------
hliyan
This seems to be a problem of constructing approximate mental models to
explain "emergent" properties of complex systems composed of interacting
simple elements (such as atoms). I would like to think that such explanations
aren't always necessary, so long as a computer simulation of the simpler (and
well understood) elements can reproduce the emergent behavior. It seems to be
a limitation of human working memory rather than some theoretical limit to
comprehension.

~~~
da_chicken
No, you're not thinking big enough.

Think of it like this. You've got a dog. Your dog is very smart. She knows her
name, she does tricks, she has a pretty apparent understanding of several spoken
words, tonal inflections, and body language, and can even catch a frisbee out
of the air when you throw one. However, no matter what you do, no matter how
hard you or she tries, your dog will never understand calculus. A dog is
simply not capable of understanding what calculus is or what it is for or how
it describes relationships of objects in Newtonian physics. There is a
fundamental physiological limit to her intelligence, and topics like calculus
are beyond it.

Humans are much more capable than dogs. However, humans are still physical
beings. As such, they have physical limits to brain size, and physical limits
to brain activity. We also have a limited life span, so if something exists
which might be comprehensible to a 200 year old human, we may never know it.
It is most probable that there are concepts, topics, and systems that are
simply too complex for human intelligence to understand even if we had access
to the equivalent of a Calculus For Dummies.

Worse, human brains are limited to their senses, and scientific understanding
is contingent on both observation and repetition. A system can only operate on
the data it has available to it. If something exists which either cannot be
perceived or cannot be repeated, then we have essentially no capability of
even knowing that we don't understand it. The fact that some of the most
complex systems have been understood through abstraction, encapsulation, and
modeling doesn't mean that _all_ possible systems will behave that way or can
be understood that way or, frankly, even that the knowledge we gain about
systems that way can truly be called "understanding." How often do we say, "It
wasn't until I experienced it myself that I understood"?

As far as AI, well, we don't even know if they're capable of consciousness or
sentience, let alone sapience. They may just be arbitrarily complex systems
good enough to simulate human behavior and fool a Turing test. No, passing a
Turing test doesn't mean you're conscious or sentient; it means you're not
definitely non-sentient or inanimate. And if the basis of everything an AI
knows is what they learned from us using algorithms we decided to give them,
what does that mean they won't be capable of just because we're the ones that
taught them?

TLDR: The universe is not obligated to function in a manner comprehensible to
human intelligence.

~~~
cutcss
False; mostly because a simple analogy can make one understand, at least
slightly, topics that one has no prior knowledge of; dogs cannot understand
even the simplest of analogies, mostly because they have no language.

And even if that argument isn't good enough: the dog has no tools to escape
its constraints; humans do. We are constantly messing with our brains, looking
for ways to make them better, be it by gene therapy (DNA mods), chemically
induced (drugs) or hardware implants (artificial neurons).

~~~
libraryofbabel
As I read it, the point of the dog story is really to suggest that there are
probably limits to understanding at every level of intelligence. Sure, humans
have a lot of cool tools for thinking about the world that dogs don't have
(analogies, language, symbolic thinking, whatever). But it's only a kind of
happy accident that they have also made us pretty good at doing science. Is it
really so unreasonable to think that there might be things about the physical
world that we can't understand with these tools?

> we are messing with our brains constantly looking for a way to make it
> better; may it be by genetic therapy (DNA mods), chemically induced (drugs)
> or hardware implants (artificial neurons).

What makes you so sure humans can bootstrap their way into ever higher levels
of intelligence, _ad infinitum_? Sure, it's possible to take that view, but I
would say it's incredibly optimistic. And not one of those ideas you mention
has yet made humans any better at doing science (well, maybe 'drugs' -
scientists do love their coffee).

~~~
cutcss
We don't need it to be ad infinitum; we only need it high enough to understand
that which we still don't.

------
dwaltrip
The most obvious limits are observational limits and
comprehension/interpretation limits. The first is determined by the tools
available, and thus technological and engineering advancements extend this
limit. Of course, there are inherent difficulties that accumulate as we try to
peer further and deeper, and I think it is likely we hit some practical
barrier that is virtually impossible to get past. If, for example, a thousand
years from now, we need the energy of a million suns to fully verify that we
have reached the bottom layer of physics, we may never be able to pull that
off.

Comprehension and interpretation limits seem fuzzier and trickier to speculate
on -- but they definitely exist. We are running up against many of them right
now with the complex systems we attempt to model. It seems that software tools
and AI developments are the most obvious ways to continually push this
forward, but there may be some deep difficulties and pragmatic barriers
lurking here as well.

There is a lot of room before we start hitting serious limits and overwhelming
pragmatic difficulties. Once we have a highly utilized Dyson swarm, then we
can reevaluate.

~~~
ralfd
[http://slatestarcodex.com/2017/11/09/ars-longa-vita-
brevis/](http://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/)

~~~
dwaltrip
Very interesting essay, thanks for sharing. I really should read Scott
Alexander more often.

------
digitalmaster
“The saddest aspect of life right now is that science gathers knowledge faster
than society gathers wisdom.” - Isaac Asimov

~~~
PoachedSausage
I've not heard that quote before, it sums things up nicely. Most of our
problems are political/social.

The phrase "wise guy" is even an insult.

~~~
ZenoArrow
> "The phrase "wise guy" is even an insult."

When wise guy is used as an insult, its meaning is close to that of smart
arse, in that it implies someone is acting smart to belittle others. Whether
that's the intention or not, that's what's being implied. If you take the ego
boosting aspects out of it, then the term 'wise guy' is not derogatory.

[https://en.oxforddictionaries.com/definition/smart-
arse](https://en.oxforddictionaries.com/definition/smart-arse)

I'm aware of the irony of explaining this term in this way, but I can't think
of another way to effectively do it.

------
hprotagonist
Didn't Gödel and Turing and Church and Heisenberg definitively answer this
question in the affirmative for most of the first half of the 20th century?

There exist an infinite number of questions, about the physical world and in
math alone, that one can ask, whose answers are "we cannot know within the
context of these axioms".

Almost by definition, these questions are generally somewhat uninteresting,
but there sure are a lot of them.

~~~
cdancette
This doesn't really mean there's a limit to scientific understanding, just
that there's an infinite number of questions we can't have an answer for.

But there could be an infinite number of questions we _can_ have an answer
for as well, in which case there's no limit to our understanding.

~~~
supergarfield
Something being infinite doesn't mean it has no limit (I guess "boundary"
would be a better term). If, for instance, it were impossible to answer any
question about the strong interaction, I'd clearly call that a limit to our
understanding, regardless of the infinity of things we can say about black
holes.

~~~
cdancette
I think we could understand why it is impossible to answer such a question,
and this would be knowledge, and maybe tell us where to look next.

------
csomar
I don't think astronomy is easier than biology. It just so happened that we
were successful at calculating some stuff, but were we successful at, say,
flying a rocket to Alpha Centauri? Curing the common cold is not a
calculation.

Apples and oranges.

That being said, there is no limit to scientific understanding if we accept
mathematical models (and maybe probabilistic ones). The trouble with the world
is that we try to understand everything with our simple senses: feel of
gravity, size, picture, touch, etc...

Case in point: I was trying to explain to a friend that he should ditch the
idea of a physical atom. An atom has no shape. It has volume and mass, but
that's it. It makes no sense to think of the nucleus as spherical or
rectangular. At these scales, geometrical reality is not a reality.

But he still insisted that he wants to see what one "looks like". That's how
we make sense of the world.

~~~
analog31
Show him atomic force microscope images, or a single atom glowing in an atom
trap.

Show him how hypotheses about the shapes and sizes of nuclei are tested via
atomic spectroscopy and scattering experiments.

~~~
csomar
That's quite some stuff I'm not aware of. I knew that we know the shape of
the atom by interacting with it, but I didn't know that we have that much
precision at predicting something more complex.

------
everdev
The article mostly argues that some concepts are too complex for humans to
understand and that only through computers or post-human intelligence can we
discover further scientific truths.

But the scientific process is independent of human intelligence. So the more
interesting question is: given an infinitely intelligent organism, can that
organism use the scientific process to completely understand the universe?

Or does science have a flaw that limits its ability to explain certain
truths?

~~~
aalleavitch
Arguably, there are already many concepts that we as a society understand
deeply but any individual human is completely incapable of grasping.
Certainly, no single human could possibly hope to understand the whole sum of
modern human knowledge; collectively we already constitute an intelligence
billions of times more complex than a single human. In fact, there's a good
argument to be made that the scientific process can't even work on the scale
of a single human because it would be destroyed by bias; it's entirely in the
notion of how you communicate and provide evidence for your theories to other
people that the objectivity in science arises.

When it comes to whether any intelligent system could understand the entire
universe, I think at some point we start reaching the level where the terms of
the question start to break down. What if we required an intelligence that
spanned the entire universe in order to understand the universe? Is there
maybe a sense in which we can say that the universe taken as a unit
necessarily understands itself already? We start to rub up against very
fundamental epistemological concepts that might not hold when we're
considering a system vastly different from a human being.

~~~
PoachedSausage
In some ways the universe is aware of itself in the form of consciousness, if
you believe it's real, of course.

~~~
aalleavitch
I mean, consciousness is obviously real. As conscious beings it is
fundamentally the only thing we actually know is real. The confusion there
comes from the fact that we can't prove our own consciousness to each other
and we poorly understand what actually constitutes it.

~~~
PoachedSausage
I should have clarified, I personally believe consciousness to be real and
possibly even an inherent property of the universe.

However there is a school of thought that believes consciousness to be an
illusion.

------
raverbashing
The limit to scientific understanding could maybe be described as this:

You're looking for explanations in the format P(x1,x2,x3...)->R where P is a
"function" (the modeling of your theory), x1, x2 etc are measurable factors
and R is a result

Now, you can't measure all the factors, but you can have a good set of them
that explains a result closely.

Hence the question is: are there phenomena for which you can't measure enough
x's (or even good proxies for the x's) to get a good prediction of your
result? And that's not even going into the problem of establishing the theory
in the first place.
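A toy illustration of that limit (the function and numbers are made up): if the true result depends on a factor x3 you can't measure, the best model P over the measurable x's carries an irreducible error, no matter how much data you collect.

```python
import random

random.seed(0)

def true_result(x1, x2, x3):
    # The "real" phenomenon depends on three factors.
    return 2.0 * x1 + 3.0 * x2 + 5.0 * x3

def model(x1, x2):
    # Our theory P(x1, x2): the best we can do without measuring x3
    # is to assume it sits at its average value (0.5 here).
    return 2.0 * x1 + 3.0 * x2 + 5.0 * 0.5

errors = []
for _ in range(10_000):
    x1, x2, x3 = random.random(), random.random(), random.random()
    errors.append(abs(true_result(x1, x2, x3) - model(x1, x2)))

# The average error is bounded below by the spread of the unmeasured
# x3 (~1.25 here), and more samples never drive it to zero.
print(f"mean |error| = {sum(errors) / len(errors):.2f}")
```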

In physics you can repeat experiments multiple times, few experiments are
unethical and you can have very exact measurements.

Now compare this with medicine. Also compare the difference between making
general predictions for a whole population and making precise predictions for
one specific individual. How many x's do you think would affect the risk of
cancer, the risk of cardiac disease, or just the risk of a weird mutation that
changes risks slightly (and that medicine hasn't even heard about)?

There's your limit.

~~~
dsnuh
I'm not a PhD, but I always like to think of this explanation when trying to
understand how progress happens. In bumps.

[http://matt.might.net/articles/phd-school-in-
pictures/](http://matt.might.net/articles/phd-school-in-pictures/)

~~~
foolrush
Likely the false-narrative myth at work, in a similar fashion to attributing a
narrative structure to the universe, for example.

As a counterpoint, Foucault's Order of Things offers up a historical
“archaeology” of the sciences congruent with power structures.

~~~
dsnuh
It's gonna take me bit to parse this...

~~~
ZenoArrow
Basically what foolrush is saying is that humans have a desire to understand
their world through stories, which leads us to invent believable stories even
if they don't line up with reality.

The suggestion is also made that Foucault had a different explanation for how
scientific advancements are made, and that explanation suggests that the
progress is guided by the society the scientists find themselves in.

Personally, I don't believe either is completely accurate. Whilst I agree that
storytelling is a fundamental part of the human condition, which can lead us
to jump to false conclusions, in general scientific knowledge is often built
up iteratively, we build on what we already think we understand to explore
what we don't. Society may influence which frontiers we're most likely to
explore, but you can't decide on the impact of new discoveries, and what other
new frontiers they unlock.

~~~
dsnuh
Thank you for the analysis. I suppose I need to read some Foucault to form an
opinion. My only exposure to him is his tangential relationship to Foucault's
Pendulum by Umberto Eco, which I read years ago.

~~~
foolrush
While most heavy rationalists insist that Foucault is full of rubbish, it is
pretty easy to see how his vantage has a significant degree of merit, even if
one entirely ignores the transmission of ideology.

If we look to "science" today, we can see that it relies heavily on funding.
In fact, so much so that "science" can't move without it in some instances. So
where we have markets and capitalism, we can see that the thrust of science is
driven by such; some "science" is crafted, other "science" is utterly ignored
or starved. So even in a superficial way, we can begin to take note as to how
our perception of what "science" is has a contextual element from within the
society it is birthed from.

Foucault's primary gist, if you can survive reading his work in English, is
that the very ontologies are fundamentally flexible and yield to the
ideological underpinnings of a given time. One must historicize to see
"science" clearly, and indeed he does just this in the work. Others have
written since then, including discussions of the recent creation of "science"
as we know it today, which isn't nearly as ancient as some would have others
believe[1].

It can be an _incredibly_ difficult tome to wade through, but one that comes
with significant reward if you manage to parse the work.

[1] [https://dangerology.wordpress.com/2013/09/13/steven-
pinker-i...](https://dangerology.wordpress.com/2013/09/13/steven-pinker-is-a-
jerk/)

~~~
dsnuh
Without trying to sound dismissive of someone I admittedly haven't read,
this just seems really obvious to me.

~~~
foolrush
The trite example I listed _is_ obvious, yet as obvious as it is, some folks
insist that “science” is this free thinking process.

Foucault of course is much more expansive than my trivial example, covering
the notion of institutions as ideological enforcement, among other things.

No fool on Hacker News can summarize his work in a reply, and one would be
heavily encouraged to read it. It cuts right to the core of epistemology
itself.

~~~
dsnuh
Apologies if I misunderstood the example. I will be reading some of his work.
It sounds like I may be in agreement with his premise. Thanks for pointing him
out to me.

~~~
foolrush
I was an unruly youth when I first was fed him.

In hindsight, I truly believe that not only were his theories several
generations ahead of their time, but also likely a critical body of work
regarding epistemology that would relate to developments in AI etc.

~~~
dsnuh
I'm intrigued! Thanks for spreading the wealth!

------
sremani
It's not The Atlantic's article; it's a repost of an Aeon article.

Here is the link to the original: [https://aeon.co/ideas/black-holes-are-simpler-
than-forests-a...](https://aeon.co/ideas/black-holes-are-simpler-than-forests-
and-science-has-its-limits)

------
SAI_Peregrinus
Science is a process to find objective truths. It relies upon consistency:
if a counterexample is found to a theory, that theory is considered incorrect.
New theories must match all existing data. Einstein's theory of general
relativity is a good example of this; it replaces Newtonian gravitation in
extreme circumstances but gives the exact same results in the limit of
everyday energies.

Science cannot create objective understanding of purely subjective systems.
The issues surrounding consciousness and qualia are of this sort, while
everyone experiences their reality they are inherently outside the domain of
objective truth. The closest science can get is to consider them as "emergent
phenomena" and give up on explaining the issue.

~~~
coldtea
> _Science is a process to find objective truths._

There are no "objective truths" (that's metaphysics).

Science is a process to find which models/hypotheses make more accurate
predictions and have more explanatory power.

> _It relies upon consistency, if a counterexample is found to a theory that
> theory is considered incorrect. New theories must match all existing data._

That (a popular but naive epistemology) doesn't match how the scientific
process actually works.

New theories can have counterexamples or be unable to match all existing data
and still be successful in providing a handier model for more phenomena.
Theories might even be inconsistent with other theories (e.g. QM and
relativity) and still co-exist and successfully grow while searching for a
unification factor (whether that's successful or not).

New theories can succeed (and historically have been known to) even when not
accounting for all existing known facts predicted by earlier ones. In other
words, more explanatory power doesn't necessarily mean a complete superset.
The intersection might not cover 100%.

And that's just for hard sciences. For soft sciences, from economics to social
sciences, it's even fuzzier.

~~~
whatshisface
I think saying that science doesn't arrive at objective truths means defining
"objective" to be a word that nobody would ever use.

Give me a number, and I can put that many nines after the decimal place of
percent certainty in my result. For each nine, I have to run my experiment
longer. Infinitely many nines -> complete certainty.
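That "each nine costs more experiment time" intuition can be made concrete (a rough sketch assuming normally distributed measurement error; `sigma` and `margin` are made-up parameters):

```python
from statistics import NormalDist

def trials_needed(nines, sigma=1.0, margin=0.1):
    """Trials needed so the measured mean lands within `margin` of the
    truth with confidence 0.9, 0.99, 0.999... (`nines` nines)."""
    confidence = 1 - 10 ** (-nines)
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # two-sided z-score
    return int((z * sigma / margin) ** 2) + 1

for k in range(1, 7):
    print(k, "nines ->", trials_needed(k), "trials")
```

Every additional nine demands more trials, but only finitely many: the certainty really is finite but unbounded.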

So, in practice, when we're choosing between fact and fiction, what we're
really doing is putting our theories into a bucket and letting them duke it
out until there's only one voice left telling us how many struts our bridge
needs to be built with. The beauty of science is that whatever poetry, rousing
manifesto, or brilliant connections you want to pit it against, you can always
look at them and order up exactly as many nines as it takes to beat them. The
certainty of science is _finite,_ but _unbounded:_ so it's the best we've got.

> _Theories might even be inconsistent with other theories [...]_

There's a nice philosophy-of-science trick you can do if you want QM and GR at
the same time. Just look at the error bars on the experiments that support the
two theories, and propagate them down into error bars on the theories
themselves. That leads to theories being statements of knowledge over
appropriate ranges (and to appropriate precisions), making them "true" in an
absolute sense. (If you do this with Newton's classical mechanics, you will
realize that QM was not an overturning but an allowed-for refinement.)

~~~
Emma_Goldman
I think the person who you are responding to took objection at a philosophical
level, namely, it's simply not the case, however robust the scientific
results, that you have direct access to the 'pure nature of the world in
itself'. That is a conceit of the correspondence theory of truth that hasn't
been thought credible since the linguistic turn. There is no thought
independent of the language we use to represent the world, and that language
is rooted in and conditioned by history. Hence Kuhn's _The Structure of
Scientific Revolutions_. Of course, that doesn't practically undermine the
value of scientific investigation.

~~~
whatshisface
> _There is no thought independent of the language we use to represent the
> world_

This is debatable[0]. Presumably, people without language can still think, and
even if you believe that they invent an internal language to think with you
are placing language into a tool-role, where it can be created and destroyed
in service of a higher goal (thought). However, that still leaves the problems
with correspondence unexplained: but hopefully I can convince you that they
are not too bad.

I'll sketch it for you:

Coordinate systems are all equally good by the measure of how well their
predictions work. They're obviously constructions - but that doesn't itself
imply that the models that use them are constructions, any more than the
wrappers burgers come in make their meat paper. (I mean, the models might be
constructions too, but it's not the use of arbitrary coordinate systems that
makes them so.)

Specific expressions of models are all equally good by the "correspondence"
measure as well: anyone with even a grade-school math education should be able
to recognize A=BCD and I=JK,K=QP as distinct in the literal sense but
identical in some higher one. So, once again, it's obvious that we are looking
at different ways of expressing the same thing; a thing which _hasn't yet
been proven not to exist_, even though no way of expressing it can by itself
be more than just "a different way."

So, one step above algebra and two steps above specific calculations, there's
the concepts. From a scientific perspective unless you think the brain is
supernatural it's really to be expected that ideas are no more than symbols.
BUT: for the same reasons as the two cases above, it has yet to be shown that
the concepts are not themselves dancing around a _still higher_ truth, each
(effectively true) idea differing only in the baggage of being human; that
silly but fun phrase referring to how we like symbols but are presumably using
them to _mean something_.

[0][https://plato.stanford.edu/entries/language-
thought/#ObjLOT](https://plato.stanford.edu/entries/language-thought/#ObjLOT)

~~~
coldtea
> _This is debatable[0]. Presumably, people without language can still think,
> and even if you believe that they invent an internal language to think with
> you are placing language into a tool-role, where it can be created and
> destroyed in service of a higher goal (thought)._

That's a moot point, I think, as it can be argued easily that thought is a
language itself, or requires one (whether it's a human language like English,
or a language of symbols, or some other form of structured description of
events and thoughts, even if the structure just happens at the chemical level
in the brain).

> _Specific expressions of models are all equally good by the "correspondence"
> measure as well: anyone with even a grade-school math education should be
> able to recognize A=BCD and I=JK,K=QP as distinct in the literal sense but
> identical in some higher one. So, once again, it's obvious that we are
> looking at different ways of expressing the same thing; a thing which hasn't
> yet been proven not to exist, even though no way of expressing it can by
> itself be more than just "a different way."_

It's obvious in these examples -- which, coincidentally, are all from math
(algebra, coordinate systems), the quintessential non-empirical,
non-reality-based domain. It's easy to find isomorphisms like that in math,
since that's inherent in its core purpose.

It's not obvious (or true) for the general case, talking about the outside
world.

------
ranprieur
> we'll reach the limits of what our brains can grasp.

But different brains are good at different things. There will never be a line
where everyone can grasp everything on one side, and no one can grasp anything
on the other.

If one in ten brains can grasp something, there might still be academic
departments that study it. But if only one in a thousand brains can grasp
something, what do these people look like to the rest of us?

~~~
jobigoud
What if the concept you need to grasp requires you to hold an extremely high
number of "gateway" concepts simultaneously in your short term memory? There
ought to be a limit at which point no humans can grasp the concept. And there
would still be a lot of unexplored ideas on the other side.

------
LeifCarrotson
The article and comments mention possible limits of understanding due to
limitations on working memory: there could be a problem with so many moving
parts that a human cannot keep all of them in consideration at one time.

But people solve problems with huge numbers of moving parts all the time by
breaking them into smaller modules.

I wonder if the actual limit is due to human lifespans instead. There are
certainly problems and sciences and systems that are understood through 12
years of primary education, followed by an undergraduate degree, increasing
specialization in a masters degree, specialization and new discoveries beyond
that in a doctoral degree, and then postdoctoral studies and professorial
research far into the middle and end of a human lifespan.

Perhaps there are problem domains which require 100 years of study and
education to comprehend before useful new research can be done.

~~~
the8472
I think we're a long way off from reaching that limit. In research,
fundamental discoveries can take a genius many years of his career, but once
something has been discovered, the basic theory can often be taught to an
undergrad within a few months, and adjacent discoveries will improve our
understanding and make things easier to explain. Textbooks explaining it in 5
different ways will be written, our tooling improves, etc.

You don't have to be Fleming to grow penicillium; you don't have to be
Einstein to calculate relativistic corrections for GPS satellites.
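The GPS correction really is textbook arithmetic at this point; a back-of-the-envelope sketch (first-order approximations, rounded constants):

```python
# Net relativistic clock drift of a GPS satellite vs. a ground clock.
C = 2.998e8        # speed of light, m/s
GM = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # Earth radius, m
R_ORBIT = 2.656e7  # GPS orbital radius (~20,200 km altitude), m
DAY = 86_400       # seconds

v = (GM / R_ORBIT) ** 0.5  # orbital speed, ~3.87 km/s

# Special relativity: the moving clock runs slow.
sr = -(v ** 2) / (2 * C ** 2) * DAY

# General relativity: the higher clock runs fast (weaker gravity).
gr = (GM / C ** 2) * (1 / R_EARTH - 1 / R_ORBIT) * DAY

print(f"SR:  {sr * 1e6:+.1f} us/day")         # ~ -7
print(f"GR:  {gr * 1e6:+.1f} us/day")         # ~ +46
print(f"net: {(sr + gr) * 1e6:+.1f} us/day")  # ~ +38
```

The famous net figure of about +38 microseconds per day falls out directly; without the correction, position errors would accumulate at kilometers per day.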

Eventually such diminishing returns might be reached if everything remained
the same, but you also have to consider that IQ and lifespans are slowly
ticking upwards over the decades and eventually we may be able to build a
better researcher.

------
cossatot
I think an additional limit is data. Particularly for sciences with strong
path dependence, such as ecology, cosmology, evolutionary biology and (my
field) geosciences, time has simply erased nearly all of the record of
previous events. This is both a big problem (it is quite frustrating) and an
opportunity, as we progress through both discovering new records of the past,
and developing new techniques for analyzing existing records through new or
improved proxies, instrumentation, or theory. I think it’s obvious to the
practitioners that we will never know the vast majority of what we want to
know, but we can learn enough of what we need to know to keep moving forward.

~~~
sgt101
I think that cossatot is saying exactly the opposite of what Borges was making
fun of.

------
dandare
I always wondered: what if the comprehensibility of physics is like a game of
minesweeper? You solve a problem and you get clues that help you solve the
next problem, which gives you more clues. Sometimes you get stuck and you have
to continue on some other side of the board, which ultimately brings you back
and helps you solve the stuck position.

But sometimes the game of minesweeper is unsolvable: you are stuck, the clues
you got so far are not enough, and you will never get new clues. Can this
happen in reality? I don't know. To make things worse, we will never be 100%
sure that we didn't miss something; if we get stuck we will never be sure.

~~~
jopsen
And sometimes you hit a mine, much like sometimes in physics you die from
accidental radiation poisoning :)

------
ackfoo
Martin Rees is very inventive: he postulates limits to scientific
understanding and immediately demonstrates them, using himself as the
exemplar.

Rees states, "Big things need not be complicated either. Despite its vastness,
a star is fairly simple—its core is so hot that complex molecules get torn
apart and no chemicals can exist, so what’s left is basically an amorphous gas
of atomic nuclei and electrons."

OK, Martin, in that case I expect your prediction for the exact number and
distribution of sunspots on Sol for every day of 2018 on my desk in the
morning. A star is fairly simple, no? So what's the holdup?

Stars are complex; Martin Rees is simple.

~~~
whatshisface
Your sunspot example is mostly determined by convection in Sol's outer layers,
not the core. Stellar cores are usually in very tight equilibria.

------
EGreg
There is a similar thing in mathematics.

The Appel and Haken proof of the four color theorem was the first computer-
aided proof of a major theorem.

As such, we aren't sure _why_ the theorem is true, but we are sure that it is.

An interesting question is about the logical necessity of a mathematical
theorem being true if we require a computer to interpret it.

After all, we require people to interpret smaller logical statements, too!

Stephen Wolfram's _A New Kind of Science_ was all about that stuff. Did it
ever reach any _conclusions_, though? Or just "look how cool complexity can
be"?
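The flavor of a computer-certified fact can be seen in miniature with a brute-force check (my own sketch; Appel and Haken's actual method, checking nearly two thousand reducible configurations, is vastly more involved): the computer confirms *that* four colors suffice for a small planar graph without giving any human insight into *why*:

```python
from itertools import product

# A small planar map as a graph: a hub (vertex 0) touching every vertex
# of a 5-cycle.  An odd wheel like this needs exactly 4 colors.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5),
         (1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
n = 6

def colorable(k):
    """True if some assignment of k colors gives every edge distinct ends."""
    return any(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n)
    )

print(colorable(3), colorable(4))  # False True
```

The search certifies the answer exhaustively, but reading the loop tells you nothing about why planarity is what makes four colors enough.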

------
asimpletune
It's kind of like the argument against trying to understand the brain by
modeling one: OK, so let's say you have a perfect model; all that's left is to
understand your model.

Or another way I've heard it explained: if we were trying to understand Mario
and could perfectly emulate its circuitry, would that translate into us
understanding the concept of saving the princess? Probably not; we'd probably
venture a guess that collecting coins is the most important thing in the game.

------
gniv
I was disappointed that the article doesn't mention machine learning. To me
it's the perfect example of both the limits of our understanding, and how we
can overcome them: Even though we don't understand how AlphaZero learns to
master the game, we can use it to do so.

~~~
monocularvision
Several times the article discusses using computers to further our
understanding.

------
acobster
"Science is not and cannot be a quest for a complete knowledge of the
universe. Rather, it is a process whereby certain information is selected as
being more relevant to human aims and understanding."

\- William Poundstone, from _The Recursive Universe_

------
oldandtired
There are many interesting comments both for and against the question. Over a
recent period of time, I have had a good and thoughtful discussion with a
nuclear physicist and educator about models in science. He challenged me to
look further into the subject, which I have.

My statement to him was that "all models are wrong, but some are useful".
Since that discussion, having looked into the mathematics, the experimental
evidence, and various opposing theories in particular areas of science, I am
now even more in agreement with that statement.

Our understanding of the universe around us is not only limited; we humans
(all of us) also get caught up in dogmatic belief in our underlying premises.
This works to limit our understanding of any area.

The tools we use to investigate the universe around us are just that: tools.
Whether we use machine learning and computers, space probes, telescopes,
high-energy colliders, electron microscopes, or rulers and meters, these tools
are only useful to the extent that we do not ignore the experimental results.

It is very obvious that many scientists ignore questions that challenge their
"pet" models. There are no stupid questions, just inappropriate answers. Much
data has been collected that does not fit the standard consensus models and
thinking.

Whether or not an alternative model is useful is not the problem here. It is
the immediate response of anger at questioning the "dogma" that, in the end,
limits our ability to further our understanding of the universe around us.

I, personally, do not believe in the major standard models in use today. I
find that there are significant problems with those models in explaining the
world around us, and they leave too many things ignored. Do I have an
alternative model? No. But that's okay: I am not required to supply an
alternative model, I just have to point out inconsistencies in the models and
evidential data that is being ignored.

Far too often, reputations are considered more important than
inconsistencies, and consensus more important than problems in the models. If
a question that disputes a particular model is raised and cannot be answered
simply, then it might be a good idea to look for an alternative, simpler
explanation.

There are ideas that are completely off the wall, but if they are investigated
properly we can discount them appropriately, and in that investigation we may
find further areas for study that would not arise otherwise.

Science should be fun.

------
pteredactyl
Can science answer what happens after death?

------
MechEStudent
Yes. The universe has a "Johari window"; that is an implication of Gödel's
incompleteness theorem.

~~~
neuralzen
Glad someone mentioned this, it was the first thing that came to mind.

------
Koshkin
Why should there be a limit? Excluding cases when information is lost (or is
somehow inaccessible in principle), everything can be investigated, learned
and understood - given enough time and effort.

------
denzil_correa
Yes, indeed there is a limit. However, this limit is the optimal line we
(currently) have between "understanding everything" and "accepting every
claim made".

------
Manglano
Tarski's undefinability theorem.

