
A World Run with Code - dustinupdyke
https://blog.stephenwolfram.com/2019/05/a-world-run-with-code/
======
daenz
>Right now, most of what the computers in the world do is to execute tasks we
basically initiate. But increasingly our world is going to involve computers
autonomously interacting with each other, according to computational
contracts. Once something happens in the world—some computational fact is
established—we’ll quickly see cascades of computational contracts executing.
And there’ll be all sorts of complicated intrinsic randomness in the
interactions of different computational acts.

I don't think it takes a math genius to see how this is a bad idea. In the
same way that trading algorithms can get into a feedback loop that crash
markets, these "computational contracts" can cause cascading failures that
hurt society as a whole. This is why human intelligence is so critical in
running society: it has the ability to question whether its "programming" is
correct and is having the intended effects, and to adjust accordingly. Computational
contracts have no such introspection, by definition. They resolve because the
rules are satisfied, for better or worse.

And all of that isn't even considering the attack surface area for malicious
actors to target.

~~~
raz32dust
In the same vein, "human intelligence" led to the 2008 crash. Humans aren't
infallible either. My problem with this argument is that there is no logical
basis for how exactly human intelligence is different. Computers aren't there
yet in most domains, but there is no evidence to say they can't get there. How
soon is anybody's guess right now.

While AGI might be far off, I can certainly imagine computers running larger
and larger sub-worlds. E.g., if all cars were self-driving, I am reasonably
sure we can design traffic to be more efficient with all cars coordinating
with each other instead of humans trying to do so.

~~~
jerf
"My problem with this argument is that there is no logical basis for how
exactly human intelligence is different."

I can give you a _partial_ answer, which is that human intelligence is
somewhat slower, which mitigates the ability to crash the entire economy in 15
seconds. That's why a stock market can crash in 15 seconds nowadays, but "the
housing market" can't. This gives people some time to do some things about it
with some degree of thought, rather than all the agents in the system suddenly
being forced to act in whatever manner they can afford to act with 15
milliseconds of computation to decide.

~~~
robbiep
That’s also why there are trading halts built into all major exchanges, so if
something triggers a large fall, there is a circuit breaker in the decision
process of 18-48 hours for people to process and digest information

~~~
gricardo99
Just FYI, NYSE Rule 80B[1] halts trading for 15 minutes, or until the next day
if the first 2 levels are breached. CME follows similar rules.

1 - [http://wallstreet.cch.com/nyse/rules/nyse-rules/chp_1_3/chp_...](http://wallstreet.cch.com/nyse/rules/nyse-rules/chp_1_3/chp_1_3_4/chp_1_3_4_21/default.asp)
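
The tiered logic works out to something like the sketch below. The 7%/13%/20% thresholds and the 3:25 p.m. cutoff are my recollection of the post-2013 version of the rule, not quoted from it; check the linked rule text for current values.

```python
# Rough sketch of the Rule 80B tiered halt logic. Thresholds (7%, 13%, 20%
# declines from the prior session's S&P 500 close) and the 3:25 p.m. cutoff
# are assumptions for illustration; see the rule text for the actual values.

def halt_action(decline_pct: float, before_325pm: bool = True) -> str:
    """Map an intraday percentage decline to the halt it triggers."""
    if decline_pct >= 20:
        # Level 3: trading stops until the next session
        return "halt until the next day"
    if decline_pct >= 7:
        # Levels 1 and 2: a 15-minute pause, but only before 3:25 p.m.
        return "15-minute halt" if before_325pm else "no halt"
    return "no halt"
```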

~~~
greenyouse
Just for fun, here's a link to the 2010 flash crash, before those rules were
in place, when a trillion dollars in market value vanished in half an hour due
to algorithmic trading.

[https://en.wikipedia.org/wiki/2010_Flash_Crash](https://en.wikipedia.org/wiki/2010_Flash_Crash)

------
jwr
I find Stephen Wolfram to be an interesting person. On one hand, he is
undeniably exceptional and created an impressive computational system. On the
other hand, the "Wolfram language" is really a pretty poor design as far as
programming languages go, and would not even be noticed if it wasn't for the
giant standard library that gets shipped with it, called Mathematica. I use
the "Wolfram language" because I have to, not because I want to.

In other words, if I could easily use the Mathematica library from Clojure, I
wouldn't give the Wolfram Language a second glance. I can't think of even a
single language aspect that would tempt me to use the Wolfram Language
[context: I've been a Mathematica user since 1993] over Clojure. I have (much)
better data structures, a consistent library for dealing with them,
transducers, core.async, good support for parallel computation, pattern
matching through core.match, and finally a decent programming interface with a
REPL, which I can use from an editor that does paren-matching for me (ever
tried to debug a complex Mathematica expression with mismatched
parens/brackets/braces?).

This is why the man is a contradiction: his thoughts are undeniably
interesting, but his focus and anchoring in the "Wolfram Language" is jarring.

~~~
kodz4
It's quite clear what he is saying. Programming languages are what programmers
use. And how programmers think is dominated by how a computer works, which is
usually a gigantic distraction to actually solving real world problems.

Newton, Maxwell and Einstein didn't need to waste any of their time thinking
about how to use a computer to solve the problems they worked on.

If I ask Google for the 2018 Wimbledon champ, it tells me it has found
4,700,000 results in 0.90 seconds. Take a step back and think about this.

They have their own knowledge graph. They have Wikipedia access. They have the
ATP site cached. They have the Wimbledon site cached. But they aren't able to
tell that the problem being solved doesn't need 4.7 million results.

This is the kind of mindlessness that happens when the focus is not on the
actual problem, but what the computer can do.

~~~
salty_biscuits
"Newton, Maxwell and Einstein didn't need to waste any of their time thinking
about how to use a computer to solve the problems they worked on."

Actually, a lost bit of maths history is that computational ability was really
important; it was just really manual. Think about Gauss doing least squares to
find the orbital parameters of Ceres by hand! Then the stories about him being
able to multiply large numbers in his head start to make a bit more sense, not
just as a parlor trick.

------
placebo
Not directly related to the article, but going through the comments, I felt
the vibes and wanted to express my theory about what it is that irritates many
of the commenters here.

The highest pursuit is the search for truth. This includes the challenging act
of discarding things that we'd really like to be true, but are false. The
success of science can be attributed to extreme selfless intellectual honesty.
The more ego gets in the way, the more the truth is compromised. Intelligence
can be used in service of finding truth or in feeding our delusions. My view
on the distinctions made here between genius and madness is that they
correspond to the degree to which intelligence serves truth or delusion.
Therefore I'd expect the most outstanding scientists in history were also very
humble (perhaps someone with better knowledge on the personality of great
scientists can shed more light on this).

And to keep things consistent - I might be wrong and thus welcome challenges
to this theory :)

~~~
vbuwivbiu
While I definitely agree with you that this is the general pattern in life,
the thing is that people aren't always either humble or arrogant, and it is
possible to see the truth while being arrogant too. Even Wolfram must have
days, or hours, maybe minutes - let's say seconds - when he's humble. And
sometimes he also sees the truth while being arrogant.

Therefore it's a form of arrogance to dismiss (generally) arrogant people as
always wrong. We have to be selective or we might miss something important
they've seen.

Wolfram is a combination of irritating ego and inspiration.

~~~
placebo
You are absolutely correct, and I even deleted an entire section in my
original post about how things are never just black or white because it got
too long. Thanks for bringing it up.

------
keerthiko
I had this thought when I was 13 and first learned the basics of programming:
"Hey dad, if everything is determined by physics and chemistry, and computers
are real good at math, can't we analytically deduce the future?"

I eventually learned this was just the 13-year-old's version of turning
"why's the sky blue?" into "but why is Rayleigh scattering a thing?", and that
there are limits both to human understanding of science and to computation
itself: a computer with sufficient accuracy to model the world would by
definition need as much memory as the world, and would have to model itself
_in_ it. I moved on from the idea shortly after learning that.

Is Stephen Wolfram just an overgrown child? Maybe unironically that's what
being a genius is about.

~~~
as300
> a computer to have sufficient accuracy to model the world would by
> definition need to have as much memory as the world, and model itself in it.

But what if we had certain "compression" abilities that allowed us to simplify
the world? Similar to how storing an audio codec lets us massively minimize
how much storage we need for music?

~~~
keerthiko
Magical compression algorithms that have amazing compression ratios _and_
amazingly low computational cost to decompress and process the data could
theoretically raise the upper bound of a computational world. But compression
cannot solve the unbounded recursion problem of the simulator having to model
itself modeling its model of the world, unless we arrived at an analytical
closed-form solution for any prediction rather than relying on modeling.

------
sktrdie
To be honest I haven’t understood much of this.

Am I the only one who’s very skeptical of AI? I see no connection
between what we call “biological thinking” and computation.

Even though I don’t know much of the theory behind AI, to me it’s similar to
saying that since we have lots of simple calculators, we can arrange them
together in some specific way and emergent intelligence will arise.

Sure yes I mean but you can say that about everything: let’s arrange a bunch
of forks together and intelligence might emerge.

And actually, from a math point of view, you could by sheer luck arrange some
forks together and have intelligence, since intelligence seems to emerge from
an arrangement of atoms.

I don’t see why computation gets a go at intelligence while anything else
doesn’t. What’s so unique about computation?

~~~
tlb
The last 80 years have had a string of successes for arranging simple
calculators in ways that produce impressive results. Google search, world
champion chess players, and Fortnite are all examples.

I don't know of any impressive results with arranging forks. If there were,
you could model the fork behavior on computers and probably run it 1000000x
faster.

It's certainly possible that elements other than the simple calculators we
use today will lead to the big breakthrough in AI. Perhaps quantum computation
is needed. But right now, arrangements of simple calculators seem like the
most promising route.

~~~
goatlover
> The last 80 years have had a string of successes for arranging simple
> calculators in ways that produce impressive results.

I view that as biological intelligence figuring out all sorts of clever ways
to make calculating tools produce impressive results.

> Perhaps quantum computation is needed.

But why invoke physics if we're comparing biological intelligence to our
computing tools? Isn't it enough to note the big differences? And here I'm not
talking so much about brain function as about being a living, social animal
that has to survive within a vast human culture.

This is the difference between looking at computers as stepping stones to
artificial replacements as opposed to enhancing our abilities. The AI stuff
captures the imagination, promises fully automated utopias, scares us with
apocalyptic scenarios, and is the stuff of lots of SciFi. While the reality is
that computers have always been tools aiding human intelligence.

For some reason we view these tools as one day being like Pinocchio. The
mythos of the movie AI is based on that vision where the robot boy becomes
obsessed with the blue fairy turning him into a real boy so his adoptive
biological mother will love him and take him back.

But maybe, like in the movie, someday the sentient robots will show up and
grant our robot boy his wish.

------
laythea
I watch his live CEOing on YouTube occasionally. It's quite interesting, but
at some point I feel like I am stuck in a work meeting, with no real
objections (understandably) being posed to Wolfram's proposals.

I also think the word computation is used a bit too grandiosely by Wolfram.
Which is evidenced in the writing here.

I do admire Wolfram for even advertising his CEOing meetings though. He goes
into the detail to a level you would not necessarily think a CEO would do.
Credit where credit is due, he is not shy. Many CEOs would not bother.

I mean, technically, everything is a computation, but we reserve that term
for actually complex things, not everything. To use it willy-nilly deflates
the word's impact.

------
vertline3
The idea of small rules leading to large patterns is not something Wolfram
invented, but he seems to think he did. Conway's Game of Life predates his
work, as do Mandelbrot's fractals, and then there is the Dürer pentagon.

Maybe there is something I am missing though?

~~~
AnimalMuppet
From what I have heard, what you're missing is Wolfram's ego. That's why he
magnifies the importance of what he has done.

~~~
OneWordSoln
I mean, Mathematica is a truly great, great piece of software, but his "New
Kind of Science" was both utterly worthless and a fantastic treatise on how a
person's grandiose religion of self leads to utter delusion.

~~~
zach_garwood
I started to read it when I was in college, and got hundreds of pages in
before I started to realize that there was nothing there. The entire book can
be summed up as "complexity arises from multiple iterations on simplicity"
and, like, you can teach that to a middle schooler in a day; no need for a
tome a thousand pages long.

------
rodrigosetti
So he's comparing his eponymous programming language to the arrival of
mathematical notation 400 or so years ago, and the advent of written language.

~~~
dreamcompiler
Yep. That kind of thing is pretty much par for the course for Wolfram.

------
cerealbad
perhaps wolfram is parodying his main idea in his work output. from a basic
origin he's creating monstrous bloat and complexity, as a type of
demonstrative self-referential proof.

there are people who write code until it works, and people who rewrite until
it doesn't. room enough for both.

~~~
marmaduke
Excellent observation. Perhaps it’s a twist on Conway’s Law?

------
trollied
If anyone is interested, Wolfram does what he calls “Live CEOing”, and streams
his sprints on YouTube:
[https://www.youtube.com/user/WolframResearch](https://www.youtube.com/user/WolframResearch)

~~~
stblack
These sessions are streamed live on Twitch, too.

These sessions are vastly interesting. I think many commenters here should
listen in sometime.

The Wolfram Language is hard because mapping knowledge is hard.

What many people here call "ego" is one guy, and his talented team, tackling
an enormous problem nobody has licked. So far.

------
swah
I love an optimistic view of software like the one I had when I first got
into computers.

------
dgfx
Think of all the people in prison, justly or unjustly, under our current
system where humans have to interpret and carry out our legal code. We may
extend these practices into every domain at negligible cost. Why do we desire
authority without conscience? I can think of two reasons: one, we do not wish
to hold ourselves accountable; two, we desire to see the strong afflict the
weak.

------
new4thaccount
Did he claim to invent computational irreducibility? Wasn't that Goedel or
Turing or someone like that?

~~~
marcosdumay
He claims it often. Yes, it was Turing, mostly building on previous work.

The worst part is that he misses the detail that all of math can be described
as computation (so the two are perfectly equivalent) and writes entire books
of keen observations about how much of math one can describe with computers.

------
neptvn
Wolfram might be a genius, but I personally find his writing and lectures
extremely dull. The only thing he seems to write is how everything is
"computational", and how we need to embrace his idea that computing is what
the whole universe is about. For example in this blog post alone, we can find
this following word count:

    
    
      computer - 19x
      computation* - 96x
      computational - 63x
    

Make everything computational!

    
    
      computational intelligence - 2x
      computational contracts - 7x
      computational universe - 7x
      computational language - 18x
      computational thinking - 2x
      computational fact - 1x
      computational acts - 1x
      computational equivalence - 8x
      computational irreducibility - 6x
      computational system - 1x
      computational process - 1x
      computational work - 1x
      computational essays - 2x
      computational law - 1x
    

Throwing all these words around may sound smart, but they lose whatever
meaning or relevance they were supposed to carry when overused in such a
larger-than-life manner.

</rant>
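
(For the curious, a tally like the one above can be reproduced with a few lines of Python; the exact counts will vary with whatever revision of the page you fetch.)

```python
# Count "computational <word>" bigrams in a piece of text, e.g. the
# article's HTML. Counts depend on the page revision fetched.
import re
from collections import Counter

def computational_bigrams(text: str) -> Counter:
    """Tally each distinct word that immediately follows 'computational'."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(
        f"computational {nxt}"
        for cur, nxt in zip(words, words[1:])
        if cur == "computational"
    )

# e.g. feed it the article:
# from urllib.request import urlopen
# html = urlopen("https://blog.stephenwolfram.com/2019/05/a-world-run-with-code/").read().decode()
# print(computational_bigrams(html).most_common())
```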

~~~
bitwize
Welcome to Stephen Wolfram. Wolfram is madly in love with his own ideas, even
when those ideas are not nearly as original or world-shakingly profound as he
seems to think they are. That he is a genius does not mean he is not in a
state of arrested development; prodigious genius like his can impede one's
development because people around the genius are so in awe of them that they
are unwilling to give them the much-needed corrections and reality checks it
takes to grow.

~~~
hinkley
It's very hard to teach someone whose cup is full. Very smart people can
rationalize their bad ideas faster than other people can talk them out of
them. But that doesn't mean you've actually explained yourself. If you can't
explain things to other people, do you really understand it yourself?

At this point Wolfram is lost to us. "What the hell are you talking about,
Steve?" I was just fantasizing about resurrecting Richard Feynman and having
him ask Wolfram this question, but it turns out I don't have to:

In a letter from Richard Feynman to Stephen Wolfram:

> You don’t understand "ordinary people." To you they are "stupid fools" - so
> you will not tolerate them or treat their foibles with tolerance or patience
> - but will drive yourself wild (or they will drive you wild) trying to deal
> with them in an effective way.

> Find a way to do your research with as little contact with non-technical
> people as possible, with one exception, fall madly in love! That is my
> advice, my friend.

~~~
codetrotter
> If you can't explain things to other people, do you really understand it
> yourself?

I hear this said every now and then and I don’t think I agree with it.

A lot of knowledge builds on other knowledge. And the other knowledge might be
required in order to understand the knowledge that one wishes to share.

So then either you have to explain it all, or you will have to simplify
dramatically.

Few people will want the full explanation — it would take _much_ more time to
explain than they would ever want to spend listening. And I am not saying that
to criticize those people. The same goes for myself — there are very many
things I wish I could understand deeply. But there simply is not enough time.
Specialization is necessary. We must pick the few things that interest or have
the most use to us, if we wish to have a chance to truly understand any topic
deeply at all. And even in the topics we choose there will be a lot of
subtopics that we won’t ever have time to understand as fully as we would
desire.

Even if they did want to listen, the amount of information could be so huge
that it might take several years to explain all of it. And so it can’t be
done.

So instead we have to simplify. A lot. Unfortunately, when we simplify we
often end up skipping crucial information and might even communicate the
knowledge wrongly. In the cases where we at least don’t communicate it
wrongly, there is still a significant risk, I think, that one or several of

- the truth,

- the value, and/or

- the utility

of the knowledge we try to share is not possible to see without a lot of
context that would take too much time to explain.

I have seen time and time again other people jump from

- the fact that someone was unable to explain something in a satisfying way

to

- concluding that the person that tried to explain it doesn’t himself/herself
understand it

When really I think the real conclusion should instead be that it is
impossible to determine at that present time whether what is being said is
correct. But that neither means that the other person doesn’t know, nor that
they do know!

I think the fundamental problem is that we keep insisting that things should
be simple when the reality of the matter is that they just aren’t. Or perhaps
there _is_ some simpler answer but the only path known by the person to
understanding the knowledge they possess is based on the knowledge they
learned before they learned the knowledge in question. Then it could be
possible for that person to boil it down to something that does indeed both
hold truth and is simple to explain. And perhaps that is what it means to
understand something at the deepest possible level: to have the explanation
worked out so that it builds on the fewest and simplest pieces of prior
knowledge. But to sit down and make those simplifications could also take
years. And unless it is the job or the desire of the person holding the
knowledge to spend all of that time they could spend on other things just to
work out the simplest possible explanation of the knowledge then it won’t make
sense for them to do so.

There are a few things more I would like to say about my thoughts on this too
but I think this comment of mine has probably turned into a big wall of text
already. So instead I will conclude with saying that I think that the ability
of one person to explain something to someone else

- relies on their own understanding, yes

but

- it relies to a much greater extent on the amount of knowledge that both
parties already share

and

- a lot of things in daily life build on things we all know already, and do
not extend it too much, so they are quick to explain and simple to understand

but

- some knowledge is so far removed from shared knowledge, at least by the
path to it known by the person trying to explain it, that it is for all
practical purposes impossible for that person to explain to the other

~~~
SantalBlush
You highlight a common fallacy that is repeated across the internet: "If you
can't explain to me why I'm wrong, then I must be right," or something
similar. But as you point out, this assumes that an explanation can be written
in just a few paragraphs, which usually isn't the case. It takes years of
study and experience to become an expert in a given field, and as a result, it
becomes too difficult to accurately convey the necessary information to get a
point across.

------
furyofantares
As much as I wish Wolfram would dial back the egocentric tone in places, this
is overall an excellent read.

~~~
TazeTSchnitzel
I wish he'd cite sources rather than implying by omission that everything is
his idea. There's a reason _A New Kind of Science_ was self-published.

~~~
fractalf
Bought the book some years ago, but never really got through it. Keeping it
handy though in case I need to knock out a burglar. Thing is massive, shipping
was half the cost ;)

------
memory_grep
> And, yes, there’s only basically one full computational language that exists
> in the world today—and it’s the one I’ve spent the past three decades
> building—the Wolfram Language.

:/

~~~
OneWordSoln
If he means, from a mathematical point of view, Mathematica's ability to both
symbolically and numerically solve advanced mathematical systems of equations,
he's not technically wrong, even if his oversized ego oozes out of every word
he utters.

If, however, he's talking about general computational systems vis a vis
creating programs to run on today's microprocessors, he has obviously drunk
too much of his own kool-aid.

~~~
sgillen
> ability to both symbolically and numerically solve advanced mathematical
> systems of equations.

I mean, Python, MATLAB, Julia, Octave, Sage, and Maple would all fit that
definition, I think. I do think Mathematica's CAS is the best in the business,
but it's definitely not the only player.

I'm sure Stephen has something in mind that sets Wolfram Language apart, just
not sure what that is.

