
A Few Rules for Predicting the Future (2000) - DyslexicAtheist
https://web.archive.org/web/20150219020855/http://exittheapple.com/a-few-rules-for-predicting-the-future/
======
roenxi
> The kids I was talking to were born around 1980, and one of them spoke up to
> say that she had never worried about nuclear war.

A related trick for predicting the future: things that everyone is worried
about are likely to get solved. It is the stuff that people dismiss as
impossible that is scary, because when it happens everyone is caught out in
the same way at the same time.

That also leads to a decent heuristic for figuring out if someone is
overconfident: listening to see if they can articulate a thoughtful list of
things that could go wrong. If they don't have such a list then it is likely
they won't have countermeasures in place when something goes wrong and will
fail for no good reason. Or worse, they won't pick up on the early signs of
failure when there is still time to react and salvage the situation.

~~~
mrtksn
Dilbert’s creator Scott Adams has a rule for that: Adams’ Law of Slow-Moving
Disasters.

Scott Adams himself would use this to mock people who are worried about
climate change.

The problem with this is that if you don’t worry about something, it doesn’t
actually get solved. For example, everyone was worried about the Y2K bug, and
nothing happened precisely because enough people in the right positions got
worried enough and invested in the fix.

So it’s a paradox. It’s also common sense: if you prepare for something, you
can prevent it, or at least minimise the damage if it materialises; if you
weren’t worried about it, disaster happens.

Actually, the pandemic is an example of not worrying enough. A lot of people
couldn’t get worried until hospital corridors became morgues. Months of news
and footage were not enough to convince people to worry enough to wear a
mask.

~~~
kybernetikos
The blog post where he talks about slow-moving disasters has this gem:

> When I was a kid, it looked as if the country was heading for an eventual
> race war. Today that seems impossible unless angry white guys start
> shooting.

It's funny how things that seem impossible one year can start to seem much
more possible a few years later. Problems that you think have gone away
sometimes resurface.

Y2K was a good example of something that didn't happen, and probably in large
part because everyone was working to stop it from happening. But to claim that
it will always work that way is to assume that there is always something
significant that can be done by the people who care.

There often is, but there are a lot of forces working against this too:
collective action is always hard, and it gets harder when the problem has
prisoner's-dilemma features or wealthy opponents who are better at spreading
their viewpoint. Some problems may well have solutions that are too radical or
costly to be deployed in the time that we have, and the fact that we haven't
seen many catastrophic problems like that so far is a form of the anthropic
argument.

The other thing is that 'problems always go away or get drastically
ameliorated if they are predicted by society' is an interesting example of an
argument that undercuts itself. If enough people believe it, it will stop
being true.

~~~
arinod
> Y2K was a good example of something that didn't happen, and probably in
> large part because everyone was working to stop it from happening. But to
> claim that it will always work that way is to assume that there is always
> something significant that can be done by the people who care.

I worked on Y2K adjustments to COBOL code in accounting systems. We had to
start well in advance, because contracts with payments due in the future did
not work. COBOL at that time had no date type and stored dates as three
integers: day, month and year. The job was basically to redo the calculation
of terms, adding 100 to the year when it was greater than 00 and less than 50
(i.e. 2000 to 2050). Anyway, anyone with a system susceptible to the bug had
no option but to make the adjustments or lose their payment controls.

Edit: adding 100 to the year, not 2000
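
The windowing fix described above can be sketched as follows. This is a
hypothetical Python illustration (the original was COBOL); the function name
is invented, and the pivot of 50 follows the comment's 2000–2050 window:

```python
def expand_two_digit_year(yy: int, pivot: int = 50) -> int:
    """Window a two-digit year into a four-digit one.

    Two-digit years below the pivot are treated as 20xx (i.e. the
    stored 19xx base plus 100, as described above); years at or
    above the pivot stay in the 1900s.
    """
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 1900 + yy + (100 if yy < pivot else 0)

# A contract term running from year 98 to year 03 now computes
# a positive duration instead of a negative one:
assert expand_two_digit_year(98) == 1998
assert expand_two_digit_year(3) == 2003
assert expand_two_digit_year(3) - expand_two_digit_year(98) == 5
```

The trade-off of windowing is that it only postpones the ambiguity: a pivot of
50 assumes no stored date falls outside 1950–2049.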

~~~
acqq
I led the team that worked for more than two years to prepare our products
for Y2K, in real-time finance, and there was not a single point of failure
but many, some not at all obvious until they were detected. So most of the
time was spent developing tests and procedures to test, verify and diagnose,
in advance, the large amounts of code where anything could be implicitly
based on assumptions that would no longer hold in 2000.

When there were (almost) no issues at the beginning of 2000, it was because
of a lot of concentrated work done correctly, and early enough. We were lucky
that the problem was on the minds of everybody making decisions, and that
those decision-makers had good motivation to make the changes (e.g. the
company could charge for the fixes, or the company would not be able to sell
the product if it weren't fixed).

The real danger is when those who make decisions (and those who directly
influence them) have a good motivation (from their narrow point of view) to
continue ignoring the problem.

That's how the environments described by Octavia Butler develop.

------
dredmorbius
_How many combinations of unintended consequences and human reactions to them
does it take to detour us into a future that seems to defy any obvious trend?
Not many. That’s why predicting the future accurately is so difficult. Some of
the most mistaken predictions I’ve seen are of the straight-line
variety–that’s the kind that ignores the inevitability of unintended
consequences, ignores our often less-than-logical reactions to them, and says
simply, “In the future, we will have more and more of whatever’s holding our
attention right now.”_

I've been reading Alvin Toffler's _Future Shock_ (1970) for the first time on
this its 50th anniversary, and in assessing its predictions and projections,
the ones which seem most accurate typically involve side effects, interactions
(often negative), and unintended consequences. Those least accurate come from
advocates of a specific technology or product.

Butler's advice is excellent.

------
aidenn0
> The young man was referring to the troubles I’d described in Parable of the
> Sower and Parable of the Talents, novels that take place in a near future of
> increasing drug addiction and illiteracy, marked by the popularity of
> prisons and the unpopularity of public schools, the vast and growing gap
> between the rich and everyone else, and the whole nasty family of problems
> brought on by global warming.

The first of those books was published in 1993. Most of the shit happening
right now was already being talked about as a problem then.

~~~
bawolff
That series literally has a fascist president take over using the slogan
"Make America Great Again". It's pretty prescient.

~~~
m463
Or a recipe for takeover.

Actually, the good part about science fiction is that it can imagine good
trajectories too. Elon Musk has mentioned many times that reading science
fiction was a formative experience for him.

------
sbelskie
I’ve never read Octavia Butler, but the first paragraph has me convinced. I’ll
be ordering a few books tonight.

~~~
jf
Her books are well worth reading. Prepare to be delighted, intrigued, and
disturbed in equal parts.

~~~
sbelskie
That’s the trifecta right there. Even more excited now.

------
just_steve_h
I miss her voice. She was a singular talent.

Toshi Reagon created a musical version of The Parable of the Sower that moved
me to tears - highly recommended if you can find a video of the performance.

------
11001001011010
Rule number one: expect people to make the same mistakes they have made in
the past. Why? Because men are stupid, ignorant, shortsighted, greedy,
arrogant, etc. In all our vanity we love to think and dream of ourselves as
capable of changing our human nature and the course of humanity, but in
reality only very few can, because it is unimaginably hard. Almost all
religions point to this.

~~~
mistermann
Many things were unimaginably hard in the past, until we figured out how to do
them. Is it impossible that this may also be the case, to some noteworthy
degree, with the human shortcomings you have accurately identified?

Some people think that the abstract idea of the ego (or more generally, _the
illusory nature of human consciousness_ ) plays a very big role in the pattern
of follies that mankind seems to repeat across generations and cultures.
Consider an idea: if you are engaged in an undertaking (say, designing a
somewhat sophisticated machine) where decisions are based on measurements, and
your measurements often happen to be incorrect (sometimes incredibly
incorrect, to the point of being the opposite of what is true), should one be
surprised when the outcome is often other than predicted? And if for some
reason the idea never occurs to you that such a flaw exists in the system, or
you realize there is a flaw but incorrectly consider it to be unimportant,
should one be surprised that problems persist over time?

------
MichaelZuo
It seems that future prediction is a common enough activity that there should
be a few rules for predicting the predictions too.

~~~
Animats
Bill Gates has said that predictions about technology tend to overestimate the
near term and underestimate the far term.

~~~
T-A
[https://en.wikipedia.org/wiki/Roy_Amara#Amara's_law](https://en.wikipedia.org/wiki/Roy_Amara#Amara's_law)

