
The Complexity Barrier (2003) - preordained
http://babylon.acad.cai.cam.ac.uk/people/dmh/engineering/engineer03/cecomplexity.htm
======
RGamma
"The good news about computers is that they do what you tell them to do. The
bad news is that they do what you tell them to do."

Short of super-intelligent general AI writing programs for us yadda yadda, I
believe the answer will be to adopt formal methods more widely: bake the
intended semantics of your software, with respect to some formal system, into
the program itself, and have it be valid only if you or the computer can prove
that it meets that specification (and have the specification itself be complete
and correct, of course, which can be difficult when you consider what e.g.
cryptographic strength or "real-time properties" actually mean). If a proof is
not feasible, at least add checks that make it fail gracefully, or generate
tests from the specification.
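
To illustrate the fallback when proof isn't feasible, here's a minimal Python
sketch (the names are made up for illustration, not from any particular
contract library): the postcondition is written down once next to the code and
executed on every call, so a violation fails loudly instead of silently
producing garbage.

    def ensures(postcondition):
        """Attach a postcondition to a function and check it on every call."""
        def wrap(fn):
            def checked(*args, **kwargs):
                result = fn(*args, **kwargs)
                assert postcondition(result, *args, **kwargs), \
                    f"{fn.__name__} violated its specification on {args!r}"
                return result
            return checked
        return wrap

    # Specification: isqrt(n) is the largest r with r*r <= n.
    @ensures(lambda r, n: r >= 0 and r * r <= n < (r + 1) ** 2)
    def isqrt(n):
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        return r

    print(isqrt(10))  # 3; a buggy implementation trips the assertion instead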

Where today the "engineering mindset" is more prevalent (look, I can do things
with it! and it does them correctly, I hope), there needs to be a shift to a
more mathematical view of computation (your program is an instance of a
certain calculus and claims to fulfill a formal specification, which needs
proof) if the software complexity mentioned here is to be managed better.
There's a whole zoo of specification techniques and low-level formal systems
available, which doesn't make this any easier.

Unfortunately support for any of this is severely lacking in popular tools
(yes, there's a lot going on in academia, but I don't feel this has spilled
over yet): Languages lacking a formal spec and changing frequently, compilers
not proven correct, programs running on buggy processors, operating systems
with unclear semantics, written in (from a type-system PoV) weakly-expressive
languages using meaningless types (void, ugh), black-box peripherals. In that
light I'm amazed my computer does anything at all, but clearly most of the
time these things don't matter.

Then again, I'm not aware of a fully featured and usable (compared to today's
personal computers) formally specified (whatever that would mean here; surely
there's more than one way to define an OS, for instance) and proven computing
system that goes from high-level language to operating system and application
ecosystem to whatever machinery is used at the bottom.

Coq and co. to the rescue (see e.g.
[http://compcert.inria.fr/](http://compcert.inria.fr/))?

~~~
krisgee
>and have it only be valid if you or the computer can prove that it meets that
specification (and have the specification be complete and correct of course

Hasn't this just moved all the issues with writing code over to writing the
specifications?

~~~
RGamma
Somewhat, yes.

A specification framework is there to help bridge the semantic gap between
what you are trying to achieve in the real world and the world of bit-banging,
if you will. And even if you forget part of a specification (like saying a
sorting function needs to leave behind a sorted collection, but forgetting to
mention that the input and output collections need to contain exactly the same
elements, or that the input collection needs to be finite for the function to
terminate), you'll still have a notion of "incremental correctness". That's
why I added the part about what specifying "cryptographic strength" would mean
(e.g.: strong against what, exactly?). You could leave that, or time-critical
properties (e.g.: this function encrypts x bits in y seconds), out and retain
the notion of "functional correctness" (i.e. the ciphertext always corresponds
to what the definition says it should be).
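
As a throwaway Python sketch of that sorting example (illustrative code,
nothing standard): the partial spec "the output is sorted" is satisfied by a
bogus sort that throws elements away; adding the "same elements" clause rules
it out, and each clause you add buys a bit more of that incremental
correctness.

    from collections import Counter

    def is_sorted(xs):
        return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

    def same_elements(xs, ys):
        return Counter(xs) == Counter(ys)   # same multiset, order ignored

    def bogus_sort(xs):
        return []                           # always sorted, but clearly not a sort

    data = [3, 1, 2]
    print(is_sorted(bogus_sort(data)))            # True  -- the partial spec passes
    print(same_elements(data, bogus_sort(data)))  # False -- the fuller spec catches it
    print(is_sorted(sorted(data)) and same_elements(data, sorted(data)))  # True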

When you're writing code you'll (most of the time :)) have an idea of what
you're trying to achieve. Formal specification should enable you to write that
down in convenient form.

------
aab0
The disease one is pretty silly. How awful, now that we've eliminated smallpox
and have cures for tuberculosis, malaria, and all the other major killers of
humanity, we are

"increasingly concerned by new rather subtle diseases (SARS, HIV, BSE etc,)
which are very difficult to understand, often difficult to diagnose or only
become apparent long after the infection, and which often seem to originate
through some change introduced by modern practices, such as use of antibiotics
on farm animals or a change in the practices used in preparing meat."

Except new diseases have been constantly arising through human history and
those examples arise from the usual ancient zoonotic transmission routes:
eating wild meat and living in close proximity with animals. (With the
exception of BSE, which inherently will never be anything but an extremely
trivial disease.) The only thing 'modern' or 'complex' about them is the
names, and that medicine has advanced past witchcraft and is able to notice
them.

------
__jal
I've been pondering a similar idea. It is more about optimization and failure
tolerance.

Highly optimized systems are great, until they aren't. F1 race cars are
notoriously touchy and require a team of specialists to keep running, while a
1950's pickup truck I happen to know of may be slow, uncomfortable and
horribly polluting, but has been in operation for pushing 70 years and is
currently maintained by a single person who doesn't have to do much to it.

Just-in-time inventory control is great for margins, until, for instance, the
bulk of the hard drive production capacity in the world is flooded.

Optimization and resilience frequently sit in tension, if not outright
opposition.

------
DonaldFisk
Joseph Tainter has a theory (The Collapse of Complex Societies) that
civilizations gradually increase in complexity, then collapse when the
benefits of additional complexity become negative.

Collapse can be, and occasionally has been, prevented through making
simplifications, but there's usually resistance to that. There's only one
civilization left. If it collapses, collapse will be global.

------
zbobet2012
I believe _truly_ complex systems will mimic the organic ones, which are so
successful. They will be chaotic systems with order as an emergent property.

These systems will _gain_ from chaotic input rather than fail.

For more: [https://www.quantamagazine.org/20140122-a-new-physics-theory-of-life/](https://www.quantamagazine.org/20140122-a-new-physics-theory-of-life/)

------
sliverstorm
_The central processor in my computer contains about 10,000,000 transistors._

The party may not last forever, but VLSI is basically a profession of scale.
It stands for _Very Large Scale Integration_. Tools continue to improve,
allowing ever more complex designs to be managed by the same team of people.
And we aren't talking factors of 2x.

------
GrumpyYoungMan
Tangentially related, the SF author Roger MacBride Allen mentions a similar
concept as part of the backdrop of one of his novels, "The Ring of Charon". I
actually find the idea more and more plausible as time goes on. The relevant
excerpt (which is also the only detailed mention in the book) is:

" _... And hatred for the Knowledge Crash. If you could hate something that
might not even have happened. That was perhaps the surpassing irony: no one
was ever quite sure if the Knowledge Crash had even taken place. Some argued
that the very state of being uncertain whether or not the Crash had occurred
proved that it had.

Briefly put, the K-Crash theory was that Earth had reached the point where
additional education, improved (but more expensive) technology, more and
better information, and faster communications had negative value.

If, the theory went on, there had not been a Knowledge Crash, the state of the
world information economy would be orderly enough to confirm the fact that it
hadn’t happened. That chaos and uncertainty held such sway therefore
demonstrated that the appropriate information wasn’t being handled properly.
QED, the Crash was real.

An economic collapse had come, that much was certain. Now that the economy was
a mess, learned economists were pointing quite precisely at this point in the
graph, or that part of the table, or that stage in the actuarial tables to
explain why. Everyone could predict it, now that it had happened, and there
were as many theories as predictions. The Knowledge Crash was merely the most
popular idea.

But correct or not, the K-Crash theory was as good an explanation as any for
what had happened to the Earth’s economy. Certainly there had to be some
reason for the global downturn. Just as certainly, there had been a great deal
of knowledge, coming in from many sources, headed toward a lot of people, for
a long time.

The cultural radicals—the Naked Purples, the Final Clan, all of them—were
supposed to be a direct offshoot of the same info-neurosis that had ultimately
caused the Crash. There were whole communities who rejected the overinformed
lifestyle of Earth and reached for something else—anything else—so long as it
was different. Raphael did not approve of the rads. But he could easily
believe they were pushed over the edge by societal neuroses.

The mental institutions of Earth were full of info-neurotics, people who had
simply become overwhelmed by all they needed to know. Information psychosis
was an officially recognized—and highly prevalent—mental disorder. Living in
the modern world simply took more knowledge than some people were capable of
absorbing. The age-old coping mechanisms of denial, withdrawal, phobic
reaction and regression expressed themselves in response to brand-new mental
crises.

Granted, therefore, that too much data could give a person a nervous
breakdown. Could the same thing have happened to the whole planet?

The time needed for the training required to do the average technical job was
sucking up the time that should have gone to doing the job. There were cases,
far too many of them, of workers going straight from training program to
retirement, with never a day of productive labor in between. Such cases were
extreme, but for many professions, the initial training period was
substantially longer than the period of productive labor—and the need for
periodic retraining only made the situation worse.

Not merely the time, but the expense required for all that training was
incredible. No matter how it was subsidized or reapportioned or provided via
scholarship or grant program, the education was expensive, a substantial drain
on the Gross Planetary Product.

Bloated with information, choked with the demands of a world-girdling bureaucracy
required to track information and put it to use, strangled by the data
security nets that kept knowledge out of the wrong hands, lost in the endless
maze of storing and accessing all the data required merely to keep things on
an even keel, Earth’s economy had simply ground to a halt. The world was so
busy learning how to work that it never got the chance to do the work. The
planet was losing so much time gathering vital data that it didn’t have a
chance to put the data to use. Earth’s economy was writhing in agony. ..._"

------
marcosdumay
We manage complexity with abstractions. That works well in software.

The problem is, how do we apply that to real things? I guess I'll hear
"robots" somewhere in the answer, but I don't really know where.

~~~
tpeo
Open standards and modular design, I'd say. It's easier to handle a mechanism
with recognizable and well-known parts. Proper documentation would help too.

I'm not so sure hardware has gone this way, though.

------
dang
Can anybody figure out the year of this? Content suggests mid-90s but I
haven't been able to pin it down.

Edit: wow you guys. I'm impressed. 2003 it is.

~~~
feral
I would go with: End September / Start October, 2003

Based on the evidence the other posters mentioned, and: "In the last few days
the entire area of mainland Italy was hit and some 50,000,000 people were left
without electricity."

and Google finding this (only blackout that size that comes up):
[http://news.bbc.co.uk/2/hi/europe/3150788.stm](http://news.bbc.co.uk/2/hi/europe/3150788.stm)

~~~
vitus
Further, point 5 refers to the SARS outbreak ('02-'03) and BSE (mad cow
disease, which I believe entered mainstream US news around '03-'04), so I'd
place it another year out, in '04.

I do find it a bit bizarre that he cites his CPU's clock rate in Mbps, though,
since it makes it a bit more difficult to identify his particular CPU.
Further, processors of the early-to-mid 2000s had roughly 10x as many
transistors per chip as the 10,000,000 he cites. Then again, it's not
unreasonable to assume he was simply using a computer that was several
generations old.

edit: ah, I see. Specific pointer to relative dates in the article. Good
catch!

~~~
feral
> edit: ah, I see. Specific pointer to relative dates in the article. Good
> catch!

Also, your post spurred me to more searching:
[http://babylon.acad.cai.cam.ac.uk/people/dmh/engineering/eng...](http://babylon.acad.cai.cam.ac.uk/people/dmh/engineering/engineer03/)

File modification dates between 30 Sept and 06 Oct 2003! The easy way to do it
:-)

------
S_Daedalus
This is a pretty old concern, and I'll worry about it when it looks like
computers are going to be taking over their own design. Right now, it's a
trick just to get them to learn unsupervised.

