
Going Critical - bkudria
https://www.meltingasphalt.com/interactive/going-critical/
======
hirundo
This essay feels like an instant classic. It's a very thoughtful blend of
prose and code, in service of teaching some wildly relevant lessons about
networks.

One eye-opener is the extent to which the "Degree" parameter reduces the
critical threshold.

> The degree of a node is the number of neighbors it has. Up to this point,
> we’ve been looking at networks of degree 4. But what happens when we vary
> this parameter?

And here's the power of this interactive essay: you can try it yourself. It's
a toy model, but it makes a visceral argument. It adds up to the kind of media
we dreamed of for the world wide web.

The degree parameter has exploded via social networks, greatly lowering the
critical threshold of idea transmission. Our cultural DNA is being revised at
a supercritical pace. This piece helps make a little sense of it in a way that
static words couldn't.
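
For anyone who wants to poke at that outside the essay, here's a throwaway
sketch of the degree experiment (my own toy code, not the essay's: a ring
lattice where every infected node recovers each step, so the rough critical
point is transmission rate around 1/degree):

```python
import random

def ring_lattice(n, degree):
    """Ring of n nodes, each linked to its `degree` nearest neighbours."""
    half = degree // 2
    return {i: [(i + d) % n for d in range(-half, half + 1) if d != 0]
            for i in range(n)}

def sis_step(infected, neighbors, p_transmit):
    """One synchronous SIS step: each infected node tries to infect every
    neighbour with probability p_transmit, then recovers itself."""
    new_infected = set()
    for node in infected:
        for nb in neighbors[node]:
            if random.random() < p_transmit:
                new_infected.add(nb)
    return new_infected

def survival_rate(degree, p_transmit, n=200, steps=50, trials=20):
    """Fraction of trials in which the infection is still alive after `steps`."""
    neighbors = ring_lattice(n, degree)
    alive = 0
    for _ in range(trials):
        infected = {0}
        for _ in range(steps):
            infected = sis_step(infected, neighbors, p_transmit)
            if not infected:
                break
        alive += bool(infected)
    return alive / trials

random.seed(42)
# Same transmission rate, different degree: higher degree keeps it alive.
for k in (2, 4, 8):
    print("degree", k, "->", survival_rate(k, 0.3))
```

At 30% transmission, degree 2 is subcritical (0.6 expected new infections per
infected node) and fizzles, while degree 8 is far supercritical, matching
what the essay's sliders show.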

~~~
atemerev
Real networks are not lattices; they have varied degree distributions. The
famous result of Marián Boguñá and Alessandro Vespignani shows that for
scale-free networks (which closely resemble the networks we see in the real
world), the epidemic threshold can be arbitrarily low:
[https://arxiv.org/pdf/cond-mat/0208163.pdf](https://arxiv.org/pdf/cond-mat/0208163.pdf).

I have also published some research along these lines:
[https://arxiv.org/pdf/1403.5815.pdf](https://arxiv.org/pdf/1403.5815.pdf)

~~~
antt
Real social networks aren't even networks, they are simplexes.

~~~
arcticfox
Why are they simplexes? I'm reading about simplexes and having a hard time
visualizing how they map to social networks.

~~~
antt
The hypervolumes between vertices can have single weights. Say you have a
pump that is contaminated with cholera. You can either model the pump as a
node on the graph with every person drinking from it connected by an edge,
allow an arbitrary number of edges between any two nodes, or allow for a
single relationship that connects an arbitrary number of nodes.
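
A concrete sketch of that third option (all names and weights invented for
illustration): represent the pump as a single hyperedge with one weight,
rather than a clique of pairwise edges or an extra pump node.

```python
# A hyperedge relates an arbitrary number of nodes at once, with a single
# weight for the whole relationship.
hyperedges = {
    "pump_A": ({"alice", "bob", "carol", "dave"}, 0.9),  # contaminated pump
    "pump_B": ({"dave", "erin"}, 0.1),                   # mostly clean pump
}

def exposure(person):
    """Total exposure: sum of weights of every hyperedge the person is in."""
    return sum(w for members, w in hyperedges.values() if person in members)

print(exposure("dave"))  # member of both hyperedges
```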

I built a rather sophisticated simplex-based trade analyser for one of my
contracts for a broker trader. From what I've heard it's given them an edge,
since no one else even knows about it. It's been three years, so my NDA and
non-compete are finished. I might get around to writing it up if I don't get
hired to do another one.

~~~
wnkrshm
By "hypervolumes" do you mean all the (n-1)-dimensional 'faces' (n > 1) formed
by a vertex and its neighbours in an n-dimensional simplex, or a mesh of
those? To assign unique weights to all interactions between a vertex and up to
n-1 of its neighbours, I assume.

~~~
antt
Yeah. And because they are sparse the representation is both tiny and stupidly
powerful.

~~~
kian
Would you care to share some pointers to materials where one could learn about
simplicial representations of things traditionally modeled by networks?

~~~
antt
There aren't any I could find, I had to do everything from first principles
and the notebooks were left with my employer.

------
lifeisstillgood
"Instant classic" is exactly how I felt - I "knew" all the parts of the
article but it has a new context and direction that gives a moment of clarity.

Cannot recommend this enough.

And on a slightly related note: a recent Talking Politics podcast had Sir
David King, who was the UK's Chief Scientific Adviser under Blair. Early on
there was a terrible foot-and-mouth outbreak, and Blair was at a loss for how
to prevent it spreading from farm to farm. But King understood SIR/SIS
networks (and the experts in this who wrote the books) and said "give me carte
blanche and we will fix this - by day X we will see a tipping point." The Army
shut down every farm, and on day X the infections stopped. That gave him
sufficient political capital to push hard on things like the Paris climate
treaty (which the UK had a lot of impact on).

In other words, understanding this article led to a global first on CO2
reduction.

Science Works Bitches

~~~
joshdance
link to that podcast?

~~~
rintrah
Here you go:
[https://www.talkingpoliticspodcast.com/blog/2019/160-david-k...](https://www.talkingpoliticspodcast.com/blog/2019/160-david-king-on-climate-repair)

------
ChuckMcM
One of the more influential papers on my early thinking of distributed systems
was the Xerox paper on Grapevine, their distributed naming service that used a
viral model for propagating updates across their networks.

The most interesting part was transmitting 'dead' information. In the linked
article things are alive or not. Now expand this article so that you have a
series of things that are being transmitted (maybe blue, red, and green dots
to represent each). And then you want to 'kill green' so that green is no
longer considered a valid state. Networks need to retain the notion of a
tombstone (in epidemiology a residual antibody) to quash flare ups of
previously killed information (which can happen when a node that has been down
for a while rejoins the network).

It makes for some great coding exercises!
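
The "kill green" idea can be sketched with a toy gossip merge (my own
illustration, not Grapevine's actual protocol): a tombstone outranks the value
it killed, so a stale node that rejoins can't resurrect it.

```python
# Each node keeps live entries and tombstones. On a gossip merge, a
# tombstone beats any value at or below its version, so a rejoining stale
# node cannot resurrect "green" after it was deleted cluster-wide.
class Node:
    def __init__(self):
        self.values = {}      # key -> (version, value)
        self.tombstones = {}  # key -> version at which the key was killed

    def put(self, key, version, value):
        dead = self.tombstones.get(key, -1)
        current = self.values.get(key, (-1, None))[0]
        if version > dead and version > current:
            self.values[key] = (version, value)

    def delete(self, key, version):
        if version > self.tombstones.get(key, -1):
            self.tombstones[key] = version
            held = self.values.get(key)
            if held and held[0] <= version:
                del self.values[key]

    def merge_from(self, other):
        for key, ver in other.tombstones.items():
            self.delete(key, ver)
        for key, (ver, val) in other.values.items():
            self.put(key, ver, val)

# A stale node rejoins after "green" was killed elsewhere.
stale, live = Node(), Node()
stale.put("green", 1, "alive")
live.put("green", 1, "alive")
live.delete("green", 2)          # green killed at version 2
stale.merge_from(live)           # gossip: the tombstone quashes the flare-up
print("green" in stale.values)   # False
```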

------
alan-crowe
You can use the concept of criticality to revise and revive the Sapir-Whorf
hypothesis.

In its blunt form (language limits what we can say), Sapir-Whorf runs
contrary to everyday experience. Sure, if one's native language contains le
mot juste, it is easy to speak one's mind. But if not, the burden is not
great. One must speak at greater length, using more words, and forming the
intersections and unions of their meanings, to obtain the exact nuance that
you intend. This is the routine craftsmanship of every wordsmith.

Early in the essay, Kevin Simler poses a challenge: "Here's an SIS network to
play around with. Can you find its critical threshold?" What is most
interesting is not the numerical value; let's just call it x. What is most
interesting is that it is fairly sharply defined. If two ideas are both
fairly hard to transmit, and hence both close to x, we could easily have a
situation in which the burden imposed by a missing mot juste makes all the
difference. One idea has a transmissibility just above x and becomes an
established staple of the culture. The other idea has a transmissibility just
below x, so it crops up from time to time but always dies out.

One looks around, admiring the cultural landscape. One idea is present, one
absent. Why? Language! While it is wrong to claim that "a person's native
language determines how he or she thinks", we have to take account of network
criticality.

The much-weakened Whorf-style claim that "a person's native language burdens
their communications with trivial inconveniences" is plausible and unimportant
at the individual level. But we may nevertheless find that "a social
network's native language determines which thoughts die out and which ones
take over most of the network."

Compare and contrast with Beware Trivial Inconveniences
[https://www.lesswrong.com/posts/reitXJgJXFzKpdKyd/beware-trivial-inconveniences](https://www.lesswrong.com/posts/reitXJgJXFzKpdKyd/beware-trivial-inconveniences),
which claims that trivial inconveniences have real-world potency without
needing the leverage provided by network effects.

~~~
TeMPOraL
Honestly, the weak form of Sapir-Whorf absolutely does reflect my everyday
experience; I figured it out well before reading about it, by pondering my
"inner dialogue". I run it bilingually, switching from English to Polish and
back at the sub-sentence level, always using the language that makes it
easier to think a particular thought.

> _But if not, the burden is not great. One must speak at greater length,
> using more words, and forming the intersections and unions of their
> meanings, to obtain the exact nuance that you intend. This is the routine
> craftsmanship of every wordsmith._

This is a nice way of putting it, but I question how "easy" and "routine" it
is. People _can_ do this, which is why the strong form of Sapir-Whorf sounds
too strong, but it's not free - and as the "Trivial Inconveniences" article
shows, that's enough for it not to be done, especially when alternatives like
"picking up a similar but not-quite-right word" or "not thinking the thought
at all" exist.

I feel this could be especially impactful on imagination (the problem-solving
kind), which can be viewed as a randomized reverse-lookup[0]. The brain
suggests things connected to what you're thinking about, and - at least in my
experience - they usually come up as _words or phrases_. If you don't have a
word for a concept, you may not think of that concept, or of the concepts
related to it. Not that you _couldn't_ think of it, just that you usually and
initially won't.

One could think of language as a cache of those "intersections and unions of
meanings" that have proven themselves to be useful. Viewed like this it's an
optimization trick, but everything we do and think is time- and
energy-constrained, so such optimizations can make the difference (especially
at a population level) in how precisely you think a thought before you accept
it as "good enough".

--

[0] - Meta: the way I figured out this idea actually involved the brain
suggesting the word "reverse-lookup" to me, and me going on from there. My
native Polish doesn't have a word for "lookup", let alone "reverse lookup",
so I wonder what I would have come up with if I didn't know English.

------
nicklaf
I am struck in particular by the 'expert --> idea' simulation. It suggests
that an effective strategy for beating the competition to the punch in making
a breakthrough discovery is to concentrate a diverse collection of expertise
(it also explains why it pays to be very social in your career as a
researcher).

As mentioned in the article, putting specialists together in the same room is
one way to accomplish this, but I can imagine the same happening in the mind
of a single polymath who, though perhaps mediocre in several subjects,
connects enough dots to beat the competition to combining them in a novel
way. It might also make sense to recruit a few such polymaths/generalists
into your room of distinct experts, since they might serve well as a sort of
'interconnect bus' between them.

~~~
rhizome
I think the conventional term for that strategy is "the creative process."

~~~
nicklaf
Fair point! :-) What can I say, my training is in math, where we take great
pleasure in squinting at obvious things until we feel insightful.

~~~
rhizome
Positively pleonasmic! :)

------
SirLuxuryYacht
I'll echo a lot of the praise about the format of this post. It was fun
clicking and watching the figures animate as I changed parameters.

I was sad that the author missed a number of chances to go into more detail
about the classic reaction-diffusion problem [1]. I was reminded of a small
project I did which produced similar animations, though with periodic
boundary conditions, while learning about the Gray-Scott model. These
websites are pretty helpful [2][3].

I haven't ever taken a class on systems, so I don't know, but after reading
this I wonder if the propagation of "scientific bullshit" and "truth" through
a network can instead be modeled chemically, as in a reaction-diffusion
model. The last figure shows real knowledge fizzling out because it turns
fake. It also lacks a slider, so I can't play with the parameters, but there
should be some point where they oscillate back and forth, i.e. a Hopf
bifurcation or a Turing bifurcation. Adding a bit more complexity might give
this post some more depth. I hope there will be a sequel!

[1]
[https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion_sys...](https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion_system)
[2]
[https://groups.csail.mit.edu/mac/projects/amorphous/GrayScot...](https://groups.csail.mit.edu/mac/projects/amorphous/GrayScott/)
[3] [https://www.karlsims.com/rd.html](https://www.karlsims.com/rd.html)
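
For the curious, the Gray-Scott model mentioned above fits in a few lines;
here's a minimal 1-D version with periodic boundaries (the parameter values
are just common illustrative choices, not anything fitted):

```python
def laplacian(a):
    """Discrete 1-D Laplacian with periodic (wrap-around) boundaries."""
    n = len(a)
    return [a[(i - 1) % n] - 2 * a[i] + a[(i + 1) % n] for i in range(n)]

def gray_scott_step(u, v, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of: u' = Du*lap(u) - u*v^2 + f*(1 - u)
    and v' = Dv*lap(v) + u*v^2 - (f + k)*v."""
    lu, lv = laplacian(u), laplacian(v)
    uvv = [ui * vi * vi for ui, vi in zip(u, v)]
    u2 = [ui + dt * (Du * li - ri + f * (1 - ui))
          for ui, li, ri in zip(u, lu, uvv)]
    v2 = [vi + dt * (Dv * li + ri - (f + k) * vi)
          for vi, li, ri in zip(v, lv, uvv)]
    return u2, v2

n = 64
u, v = [1.0] * n, [0.0] * n
for i in range(28, 36):  # seed a small blob of the catalyst v
    u[i], v[i] = 0.5, 0.25

for _ in range(200):
    u, v = gray_scott_step(u, v)

print(min(v), max(v))  # inspect how the seeded blob has evolved
```

With 2-D arrays and a plot per step this gives exactly the kind of animation
the linked pages show.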

------
throwawaymath
Is the SIR model a Markov model? The article doesn't mention whether or not
infection can be modeled as a Markov process, but based on the graphics it
looks like it.

If I understand correctly the probabilistic infection rate is "history-less";
in other words, the probability of infecting an adjacent neighbor in the
current state is not determined by the state transitions of any previous
iterations.

It looks like you could model this naively with a discrete time Markov chain
using a 3x3 stochastic matrix and three states: healthy, infected and
deceased. I would guess you could do the same thing for the SIS model using
states susceptible and infected with a 2x2 stochastic matrix instead.

In either case, modeling the epidemic as a Markov process would let you
estimate the probabilities of criticality using the limit of the stochastic
matrix. In fact, I think the critical threshold (probability of the epidemic
going critical) will be given by left multiplying the initial probability
vector by the limit of the stochastic Markov matrix.

~~~
soVeryTired
> It looks like you could model this naively with a discrete time Markov chain
> using a 3x3 stochastic matrix and three states: healthy, infected and
> deceased.

Diffusion is a Markov process, yes. But you'd need three states _per cell_
(not sure if that's what you meant by three states).

------
yayitswei
If you liked this article, here's another one about the spread of information
in social networks: [https://ncase.me/crowds](https://ncase.me/crowds)

------
soVeryTired
This is a really interesting topic and the article is nicely written.

What bothers me, though, is the effort to link the mathematics to 'real-world'
applications. I agree that forest fires, disease outbreaks, and the spread of
ideas _might_ be good candidates for this sort of modelling. But I think you'd
need an awful lot of solid evidence to back that up.

~~~
crazygringo
Without the real-world applications, a lot of this could seem dry, boring, or
irrelevant.

Many years ago I went through teacher training, and one of the biggest things
you learn is to always make material _relevant_ to students by linking
abstract concepts to real-world applications they actually care about.

It is true that in writing an article like this, you need to be very careful
with your wording to distinguish between things that "appear like", "are
similar to", "suggest", or even "is a first-order approximation of", versus
stating that this _is the_ model of epidemiology, forest fires, etc. (which
needs citations, etc.). But in this particular article, the examples seem
fairly straightforward as first-order approximations -- curious what sentences
you're specifically objecting to?

~~~
soVeryTired
> curious what sentences you're specifically objecting to

Lines like this: _If we're simulating the spread of measles or an outbreak of
wildfire, SIR is perfect._ The author could have said "this kinda-sorta looks
like it might be useful for simulating wildfire, but I haven't checked", but
of course that would be less convincing and less _exciting_.

Which is fine as far as it goes. The problem is what happens when someone
takes one of these models seriously without actually checking the details, or
without being qualified to check them.

~~~
wDcBKgt66V8WDs
I think the author does a pretty good job of highlighting that there are
imperfections and shortcomings while eliding the details. They may have
slipped up in some places, such as your quote, but I'm definitely not
concerned about anyone taking any of this too seriously and extrapolating
from it dangerously.

~~~
soVeryTired
> I'm definitely not concerned about anyone taking any of this too seriously
> and extrapolating from it dangerously.

Not this particular article, no. But there's a whole genre of excited
mathematical modelling literature where the author demonstrates a gee-whizz
concept that looks like it _could_ be really useful. The trouble is that once
you start digging down into the specific details, they turn out to be _really
hard_ to get right, and at best you end up with a model that's brittle, for
want of a better word.

An example that I have in mind is the literature on power-law distributions.
A little bit of theory showed how power-law distributions could arise via a
process known as preferential attachment, everyone got excited, and suddenly
people were spotting them everywhere. The literature on this topic is
massive.

The thing is, it turns out that it's quite hard to check that a given dataset
follows a power law. This paper [0] showed that many of the claims were
sloppy, and the researchers hadn't been careful with their statistics.

The crux of what I'm saying is that establishing that a model fits well is
_hard_ , whether it's a diffusion model, a power law distribution, or anything
else. If someone wants to claim that some mathematical widget can be used to
model X, they'd better be able to back that claim up with a real demonstration
and carefully laid-out details. Otherwise they're just waving their hands in
the air.

[0]
[http://www.cse.cuhk.edu.hk/~cslui/CMSC5734/Clauset_Shalizi_N...](http://www.cse.cuhk.edu.hk/~cslui/CMSC5734/Clauset_Shalizi_Newman_09.pdf)

------
jph
Fantastic blog post! The interactive demos show network effects, including
spreading activation, diffusion, and game-of-life style movements.

The real-world examples of epidemiology and city technology expertise are
perfect IMHO. Kudos to the author!

------
perlgeek
Very cool!

As a small translation aid: when physicists talk about criticality, they tend
to talk about "dimensions" instead of "degree".

In a one-dimensional system (a line) you can have at most two nearest
neighbors; a two-dimensional system has four, 3D has six, and so on.

Physicists have no trouble talking about fractional dimensions either, which
can be realized in surfaces, fractal-like substances and so on.

Dimensions higher than 3 are achieved when interactions between non-nearest
neighbors are relevant.
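
The neighbour counts are easy to check mechanically for hypercubic lattices:

```python
def nearest_neighbors(d):
    """Offsets of the 2*d nearest neighbours on a d-dimensional hypercubic
    lattice: one step forward and back along each axis."""
    return [tuple(sign if i == axis else 0 for i in range(d))
            for axis in range(d) for sign in (-1, 1)]

for d in (1, 2, 3):
    print(d, "->", len(nearest_neighbors(d)))  # 2, 4, 6 neighbours
```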

~~~
atemerev
While these examples use lattices for simplicity, the terminology comes from
network science and graph theory, where there are degrees and connections,
and "dimensions" mean something else (multi-dimensional networks are a rich
theory of their own).

------
lopsidedBrain
If you want to learn the underlying math for calculating these, I highly
recommend Robert Gallager's "Stochastic Processes".

It was one of the more eye-opening math books I ever read.

------
davidgl
I can't recommend his book with Robin Hanson, The Elephant in the Brain,
enough; it will change the way you view the world forever.

------
jgwil2
For more on percolation, the first exercise in this course is pretty cool:

[https://www.coursera.org/learn/algorithms-part1](https://www.coursera.org/learn/algorithms-part1)

------
your-nanny
Very nice.

I do not believe the spread of knowledge works this way, however. For one
thing, knowledge is productive. Consider a population of agents, each having
a body of knowledge: some kind of Prolog-like set of facts and implications.
Clearly the receipt of the same new fact p by different agents will produce
different logical closures. If for some agents, but not others, there is the
implication p |= q, then the spread of q might appear to work quite
differently than the epidemiological model predicts.
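
A toy version of that point (hypothetical facts and a naive forward-chaining
closure, purely for illustration): two agents receive the same fact p, but
only the one already holding p |= q derives anything new.

```python
def closure(facts, rules):
    """Smallest superset of `facts` closed under `rules`, where each rule
    (a, b) means 'a entails b'. Naive forward chaining."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            if a in facts and b not in facts:
                facts.add(b)
                changed = True
    return facts

agent1_rules = [("p", "q"), ("q", "r")]  # holds p |= q (and q |= r)
agent2_rules = [("s", "t")]              # p is inert here

print(closure({"p"}, agent1_rules))  # p spawns q and r
print(closure({"p"}, agent2_rules))  # just p
```

So agent 1 can start spreading q even though nobody ever transmitted q to
them, which the plain epidemiological model can't represent.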

------
neogodless
Of course I used the comments on PhDs to tease my friend. In reality, this
article and my friend (advancing the use of VR, etc. in education) are two
great examples of using technology to discover and spread knowledge. Network
effects/diffusion are, of course, critical for the spread of knowledge. In
both cases, by using accessible technology and by making knowledge
digestible/approachable, you can increase the transmission rates.

This article takes a complex set of ideas and presents them by using
appropriate symbols, building a base of knowledge, and then adding to it
incrementally. I thoroughly enjoyed it, and I hope many of you take the time
to enjoy it as well.

(One final thought: one of my early JavaScript projects, a couple of decades
ago, was a simulation like this using Conway's Game of Life and some GIF
files I created in MS Paint - red/blue dots, two types of trees, or the
front/back of pigs. So there's some nostalgia boosting my enjoyment here!)

------
kibwen
See also this recent Numberphile video, which considers the eventual
guarantee of extinction based on the rate of reproduction (using the example
of surnames):
[https://m.youtube.com/watch?v=z34XhE5oRwo](https://m.youtube.com/watch?v=z34XhE5oRwo)

------
johnsimer
FWIW, regarding his analysis of academia being important: I remember seeing
an analysis that research has the most positive externalities of any
industry, i.e. more than healthcare and education.

------
bovermyer
Oh man. I needed this.

I'm working on a map generator, and the previous version generated land by
placing tiles at random. I implemented the SIR algorithm, and now the map
looks much more like an actual map.

------
Arbalest
I quite like the way it shows the similarities between infections and
knowledge. It has a model that explains how self-interest hinders knowledge
in more ways than simply reducing the resources available for research. It
also explains how ideas spread like viruses generally, and reminds me again
of the idea of the mind virus.

------
galaxyLogic
What does this say about a network such as Hacker News?

------
r34
Can anyone recommend a JS toolset for creating essays like this? I can see
React components being used, but maybe there is something less complex?

~~~
gnomewascool
I haven't used it, but there's idyll[0], which was used, among other things,
to make an excellent interactive article about the JPEG format[1].

[0] [https://idyll-lang.org/](https://idyll-lang.org/)

[1] [https://parametric.press/issue-01/unraveling-the-jpeg/](https://parametric.press/issue-01/unraveling-the-jpeg/)

------
shry4ns
Great, great article. Very informative and interactive. I loved learning about
networks, and everything made sense to me intuitively!

------
Sevhall
Absolutely loved the interactive part of the article as a non-science/math
person. Food for thought and deeper research...

------
HNLurker2
Nice to see a Melting Asphalt classic blog post.

------
emilfihlman
Why is the percolation threshold lower than 25% for a square lattice?

~~~
croddin
Each node has a link to itself and can re-infect itself on the next
iteration, so a node has 5 different nodes it can infect rather than 4.
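
That squares with a crude mean-field estimate (which ignores the spatial
correlations that shift the real lattice number, so treat these as rough
figures only): if an infected node recovers each step but first gets k
independent chances to infect at probability p, criticality is roughly where
p * k = 1.

```python
def mean_field_threshold(k):
    """Transmission probability at which one infection just replaces
    itself, given k infection targets per step: solve p * k = 1."""
    return 1 / k

print(mean_field_threshold(4))  # 0.25 -- four neighbours only
print(mean_field_threshold(5))  # 0.2  -- four neighbours plus the self-link
```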

~~~
emilfihlman
Hmm, so is the critical rate 22.5 then, somewhere between 20 and 25?

------
mellavora
Should add a reference to this:
[https://www.nature.com/articles/srep08665](https://www.nature.com/articles/srep08665),
which presents the first method that accurately quantifies the spreading
power of all nodes in a network.

It's based on applying statistical physics to diffusion over a network, which
is why it outperforms prior approaches such as degree, k-shell, PageRank,
etc.

------
skunkworker
This is an awesome interactive demo. In my head I never really connected the
spread of wildfires to vaccination and how similar their network effects can
be.

------
swah
So a few anti-vaxxers aren't really that damaging and should be left alone...

~~~
MaxBarraclough
You're trolling or joking, I presume, but on the off chance you aren't, do
please explain your reasoning.

