
Networks are Killing Science - Anon84
http://www.scientificblogging.com/adaptive_complexity/networks_are_killing_science
======
anigbrowl
Linkbait is killing journalism.

I don't know whether an editor or the writer himself is to blame, but the
headline pretty much contradicts the article's thesis: because network
techniques are not tested in accordance with the normal scientific method, the
claims of their proponents are unverifiable. Ultimately such techniques will be
limited by their inability to prove or explain the phenomena they seem to
expose, and may even be dismissed as a high-level form of pareidolia.

 _Here's a prediction of my own, one that I'm willing to put to the test: if
complex systems researchers don't get serious about the scientific method,
their field is going to fizzle out, if not crash and burn._

Doesn't sound too worried about the future of 'proper' science, does he? I
think he's right, insofar as non-scientists are deeply suspicious of any kind
of modeling (see the climate change debate, for example), so using controls,
double-blinds, and so on is the best way to advance this empirical technique.

~~~
apotheon
There's a huge potential flaw in your prediction: you assume that the true
efficacy of the techniques they use will determine the survival and popularity
of those techniques. In the cases indicated, the people deciding whether to
continue funding the research are _not_ scientists who understand causal
relations worth a damn, and they're very susceptible to spin (in the political
sense of the term "spin", of course).

The saddest thing about it is that I'm positive there's something to the
theories involved, but they're being squandered by people more interested in
the popular _appearance_ of success than in true efficacy -- possibly because
they ego-identify with the successes of their pet projects, and so never pay
enough attention to the right factors to recognize the difference between the
appearance of efficacy and true efficacy.

~~~
mbergins
Perhaps I missed this in the article, but in what "cases" are the people
deciding whether to continue funding complex systems research not scientists?

My guess is that a majority of the research into complex systems is funded by
the NIH and the NSF, both of which make funding decisions largely based on the
opinions of other scientists.

------
hristov
I agree. I think the underlying issue here is that computer models are given
way too much credit. A computer model is only as good as the software it runs
on and the assumptions about the real world that this software makes.

If, as a scientist, you want to test assumptions, it is better not to rely on
complex computer models that hide all their assumptions under the hood, but to
build theories that are as simple as possible and test assumptions one at a
time. If you know all your assumptions are reliable, then you can cobble them
all together into computer models. And of course keep everything open so that
others can check your work.
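
To make that concrete, here is a toy sketch (a made-up contagion-style example
with made-up numbers, not from the article) of what it looks like when the
assumptions are pulled out where they can be checked one at a time, instead of
being buried inside one opaque model:

    # Toy model whose assumptions are explicit functions, so each one can be
    # compared against data on its own before they are cobbled together.

    def contact_rate(population_density):
        """Assumption 1: contacts per person per day grow linearly with density."""
        return 0.02 * population_density

    def transmission_probability(season):
        """Assumption 2: transmission is twice as likely in winter as in summer."""
        return 0.10 if season == "winter" else 0.05

    def expected_new_infections(infected, population_density, season):
        """The combined model -- only as good as assumptions 1 and 2 above."""
        return (infected * contact_rate(population_density)
                * transmission_probability(season))

    # Anyone can rerun this and, more importantly, attack each assumption
    # separately with real measurements.
    print(expected_new_infections(infected=100, population_density=500,
                                  season="winter"))

If assumption 2 turns out to be wrong, you know exactly which piece to throw
away; a big black-box model gives you no such handle.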

One of the most obvious and disastrous examples of the failure of computer
models is the economic crash that just happened. A bunch of bankers and rating
agencies had a lot of fancy, complex computer models that proved that their
mortgage-backed securities could not possibly lose value. So the securities
were rated AAA, and everyone treated them as essentially risk-free. Well, we
all know what happened next.

~~~
yummyfajitas
The economic crash is not a failure of computer models or even complex models.
It's a failure based on a single simple assumption: house prices never go
down.

------
psyklic
The author seems to be confused. Scientists do try to validate their models
(even the sniper author, via correlation). They _do_ understand that their
model must predict the future ... However, in some fields (like sniper-attack
prevention or psychology) there are so many external variables that it is
difficult (if not impossible) to prove that the model is exactly correct. That
does not mean the results are off target or that we cannot learn anything from
the study ...

If nothing else, no one will take a scientist's model seriously unless they
use it to predict future events accurately -- this is a fundamental
requirement of a model, after all.

~~~
KevinMS
"If anything else, no one will take a scientist's model seriously unless they
use it to predict future events accurately -- this is a fundamental
requirement of a model, after all."

Except when it comes to many, many people trying very hard to spend trillions
of dollars to fight something that is only predicted in models (failed models)
- global warming.

Here is a link to Freeman Dyson discussing this subject:

<http://www.edge.org/3rd_culture/dysonf07/dysonf07_index.html>

~~~
psyklic
When politics gets involved, it gets ugly. Special interest groups always try
to confuse the public over results that would otherwise have been accepted by
the scientific community. This causes the public to form opinions that are
based not in science but in emotion. For example: global warming, evolution,
ex-gay therapies, etc.

Global warming is definitely scientific fact, but the issue here is more about
how much money we need to spend to mitigate the (still somewhat unknown) risk.

~~~
apotheon
> Global warming is definitely scientific fact, but the issue here is more
> about how much money we need to spend to mitigate the (still somewhat
> unknown) risk.

I think there's something to the indicators that global temperatures have been
rising, of course, but the "science" involved has been so damned sloppy that
we can't even get credible estimates of past trends, let alone credible
predictions of future trends. There's a lot more at issue than "how much money
do we need to spend". Hell, there's even significant doubt about the question
of whether what is currently occurring is inconsistent with natural cyclic
trends, and many temperature sampling sites have been found to be compromised
by virtue of placement in the middle of parking lots and the like.

Get me some figures that don't look like they were compiled by
twelve-year-olds, then we'll talk.

------
Perceval
This article is about the 'inductive fallacy,' more or less. Karl Popper wrote
a long work (_The Logic of Scientific Discovery_) on the scientific method,
putting forward the criterion of falsifiability as an alternative to the
positivist approach of model-building.

~~~
fburnaby
I have gotten into doing this sort of work (the scientific computer modeling
part, not the "cargo-cult" interpretations thereof). Popper was the first
person I went back to reread when I got into it. This is very important to
understand (and so simple).

However, I disagree with the author that this type of work is dispensable. You
have to test your theories in order for them to be scientific, but to do that
you have to know exactly what experimental results your theories predict.
These complex systems are too important to stop studying, and they're too
complex for "simpler" methods to work.

------
Maro
In my experience, the best down-to-earth definition of science (referring to
the process that the scientific community collectively performs, as measured
by conference talks, proceedings, and journal articles) is this: science is
anything that helps scientists think about their field in new or improved
ways, with the caveat that the ultimate goal (in the natural sciences) is to
arrive at a testable theory.

The intermediate steps very often do not contain anything testable or
repeatable.

------
hc
this researcher seems to take it for granted that there is something
fundamental/sacrosanct about the "scientific method", but there isn't; it is a
learning algorithm whose role is being diminished by new, more powerful
learning algorithms.

~~~
caffeine
That's actually not true. There _is_ something fundamental about the
scientific method. Namely, it's _the_ learning algorithm; the only one that
actually works (so far). Everything else is just variations on or
approximations of this algorithm.

When you build a model and then do science (i.e. the Hypothesis/Test/Explain
loop)
on the model instead of doing science on Nature, you learn a lot about your
model. But you must be honest and admit that you have learned _nothing_ about
Nature. What you _might_ have done is come up with hypotheses. But it's not
enough. It's not science yet - it's just a complicated hypothesis. It doesn't
count until you test it on Nature.

The point (and it's an obvious one, well-understood by most complexity
researchers) is that your models need to enable you to _falsify_ them by doing
real, repeatable experiments on Nature, not on models. There's no point
building a model of a complex system if the only experiment that can falsify
it is so complex that it can't be done.

A lot of time is wasted this way - in many ways it's the nature of the beast
(it's called "complexity" for a reason). Nonetheless, this is frustrating to
scientists who think the resources spent on learning facts about models might
be better spent on learning facts about Nature - the more radical among them
would likely be willing to trade _all_ the modelling (and modelers?) for even
a single properly verified fact.

~~~
hc
when you train a computational model on half of your dataset and test it
against the other half (which the author claims is Not Sufficiently
Scientific), how is that qualitatively different from training it on all your
data, and then "testing it on Nature" by gathering some more and checking your
results? i am reminded of searle's chinese room argument.

> this is frustrating to scientists who think the resources spent on learning
> facts about models might be better spent on learning facts about Nature -
> the more radical among them would likely be willing to trade all the
> modelling (and modelers?) for even a single properly verified fact.

there is no getting rid of models. neither science nor any other human
activity i am aware of is capable of verifying facts about the physical world.

~~~
roundsquare
> when you train a computational model on half of your dataset and test it
> against the other half (which the author claims is Not Sufficiently
> Scientific), how is that qualitatively different from training it on all
> your data, and then "testing it on Nature"

it would be essentially the same thing if you took half the data, made a
model, tested it on the other half, got good results, and stopped.

but that's not going to happen. your first model will stink. you'll refine it
and try again. and you'll keep doing that. unfortunately, this means your
model ends up being dependent on all the data.

the easiest way to prevent this is to build a model and then test it on
real-world data that occurs after you build the model. if it doesn't work,
tweak the model, and then find new data again.
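
roughly, in code (a toy sketch with made-up data and a one-parameter "model",
nothing from the article, just to show the difference between re-checking the
same held-out half over and over and checking against data that didn't exist
yet):

    import random

    random.seed(0)

    def observe(n):
        # toy "nature": y = 3x plus noise
        return [(x, 3 * x + random.gauss(0, 1))
                for x in (random.uniform(1, 10) for _ in range(n))]

    def fit(data):
        # "fit" a one-parameter model: a single slope, the mean of y/x
        return sum(y / x for x, y in data) / len(data)

    def error(slope, data):
        # mean squared error of the model on a dataset
        return sum((y - slope * x) ** 2 for x, y in data) / len(data)

    old_data = observe(100)
    train, held_out = old_data[:50], old_data[50:]

    # honest version: fit once, check once against the held-out half, stop.
    slope = fit(train)
    print("one-shot held-out error:", error(slope, held_out))

    # leaky version: keep "refining" (here, nudging the slope) and keep
    # re-checking against the same held-out half until the number looks good.
    # after enough rounds the held-out half has quietly become training data.
    best = slope
    for _ in range(1000):
        candidate = best + random.gauss(0, 0.05)
        if error(candidate, held_out) < error(best, held_out):
            best = candidate
    print("held-out error after tuning against it:", error(best, held_out))

    # the real check: data that only came into existence after the model froze.
    new_data = observe(50)
    print("error on data gathered afterwards:", error(best, new_data))

the held-out number after tuning can only go down, because that's the number
being optimized; the number on data you didn't have yet is the one that
actually tells you something.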

~~~
hc
> your first model will stink. you'll refine it and try again. and you'll
> keep doing that. unfortunately, this means your model ends up being
> dependent on all the data.

this is exactly what the scientific community as a whole is always doing.

