Hacker News
A World Run with Code (stephenwolfram.com)
194 points by dustinupdyke on May 3, 2019 | 109 comments



>Right now, most of what the computers in the world do is to execute tasks we basically initiate. But increasingly our world is going to involve computers autonomously interacting with each other, according to computational contracts. Once something happens in the world—some computational fact is established—we’ll quickly see cascades of computational contracts executing. And there’ll be all sorts of complicated intrinsic randomness in the interactions of different computational acts.

I don't think it takes a math genius to see how this is a bad idea. In the same way that trading algorithms can get into a feedback loop that crash markets, these "computational contracts" can cause cascading failures that hurt society as a whole. This is why human intelligence is so critical in running society: it has the ability to question whether its "programming" is correct and having the intended effects, and adjust accordingly. Computational contracts have no such introspection, by definition. They resolve because the rules are satisfied, for better or worse.

And all of that isn't even considering the attack surface area for malicious actors to target.


In the same vein, "Human intelligence" led to the 2008 crash. Human's aren't infallible either. My problem with this argument is that there is no logical basis for how exactly human intelligence is different. Computers aren't there yet in most domains, but there is no evidence to say they can't get there. How soon is anybody's guess right now.

While AGI might be far off, I can certainly imagine computers running larger and larger sub-worlds. E.g., if all cars were self-driving, I am reasonably sure we could design traffic to be more efficient, with all cars coordinating with each other instead of humans trying to do so.


"My problem with this argument is that there is no logical basis for how exactly human intelligence is different."

I can give you a partial answer, which is that human intelligence is somewhat slower, which mitigates the ability to crash the entire economy in 15 seconds. That's why a stock market can crash in 15 seconds nowadays, but "the housing market" can't. This gives people some time to do some things about it with some degree of thought, rather than all the agents in the system suddenly being forced to act in whatever manner they can afford to act with 15 milliseconds of computation to decide.


That’s also why there are trading halts built into all major exchanges: if something triggers a large fall, there is a circuit breaker in the decision process, giving people 18-48 hours to process and digest information.


Just FYI, NYSE Rule 80B[1] halts trading for 15 minutes, or until the next day if the first 2 levels are breached. CME follows similar rules.

1 - http://wallstreet.cch.com/nyse/rules/nyse-rules/chp_1_3/chp_...


Just for fun, here's a link to the 2010 flash crash, before those rules were in place, in which a trillion dollars was dropped in half an hour due to algorithms.

https://en.wikipedia.org/wiki/2010_Flash_Crash


Maybe if the economy can be crashed in 15 seconds, it should be crashed in 15 seconds so we can have a saner economy?

Letting AI in is like fuzzing for computer programs. It can break your program in interesting ways but that's a good thing.


> Maybe if the economy can be crashed in 15 seconds, it should be crashed in 15 seconds so we can have a saner economy?

How do you know that would be the outcome?


Numbers getting smaller on the stock market doesn't destroy any real value. Companies still have all the IP they had before, all the real estate, all machines... If the bits in some computer's RAM are so disconnected from reality that they can change so dramatically without any catastrophe in the real world, then the rational choice is to disregard those bits.


Crashed markets reflect a real difference in what people are prepared to pay for assets. That has a real direct impact on how people plan and behave.


Slower only because humans need more time to arrive at the same conclusions, and use more data points. Either one can be made true of computers too.


Qualitatively, humans are likely to say, "Whoa! Hold on. This doesn't make sense." when presented with unreasonable data, while algorithms are likely to proceed and do whatever.

We don't have good programming abstractions for "This doesn't make sense." You want something between proceeding and aborting.


Why not? Probability + threshold/activation function in neural nets is basically modeling that.
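
To make that concrete, here is a minimal sketch (my own illustration, not anyone's production code) of the kind of "something between proceeding and aborting" abstraction being discussed: a wrapper that turns a model's confidence into a three-way decision and escalates to a human when the input "doesn't make sense". The model, its method name, and the thresholds are all assumptions for the example.

  from enum import Enum

  class Decision(Enum):
      PROCEED = "proceed"    # confident prediction: act automatically
      ESCALATE = "escalate"  # "this doesn't make sense": hand off to a human
      ABORT = "abort"        # too uncertain or anomalous: refuse to act

  def guarded_decision(model, x, proceed_at=0.95, abort_below=0.50):
      # `model.predict_proba(x)` is assumed to return per-class
      # probabilities for the single input x; both thresholds are
      # illustrative, not tuned values.
      confidence = max(model.predict_proba(x))
      if confidence >= proceed_at:
          return Decision.PROCEED
      if confidence >= abort_below:
          return Decision.ESCALATE  # neither proceed nor abort
      return Decision.ABORT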


I read once that a soldier in Russia was supposed to fire off nuclear warheads when faulty sensors erroneously indicated that America had already done so.

Had the system been a mathematical contract, we would all be dead. Instead, a determined hero sat pondering whether he should do it, and then realized the sensors had to be defective since he wasn't dead yet.


You're probably thinking of Stanislav Petrov: https://en.wikipedia.org/wiki/Stanislav_Petrov


Yes, and thanks for linking. I re-read it, and according to the article he didn't actually have the power to press the button, but his not immediately reporting to the superiors who had the "button" had the same impact, and my previous statement about automated systems built by imperfect beings remains true.


Humans don't think kittens are avocados with complete certainty because someone flicked a speck of green paint on the kitten.


Before stating this with complete certainty, I would ask Oliver Sacks first.

https://en.wikipedia.org/wiki/The_Man_Who_Mistook_His_Wife_f...


His patients realized something was wrong with their perception, unlike a computer.


Some of them did!



The biggest problem with “human intelligence” is that our memory is so short and influenced by emotion. Though judging by the amount of data created and lost over the last decade, that’s probably true of computers too.


But is emotion bad? Emotions can be thought of as programmed biases, and we might have evolved them because they are collectively good for the species. Emotion is what caused many of the examples stated in this thread where a human made a better decision than a computer.


I don't think emotion on its own is bad or good; but it's not something that a computer can emulate. You may get better decisions in some scenarios with certain stakeholders, and worse decisions in other scenarios.

When we talk about "programmed bias", we need to be careful, because different subsets of the population hold different biases -- so the bias programmed in reflects the experiences of the team that developed the software rather than the population of users it affects.


In the same vein, a machine should always be geared towards being a tool in the service of human intelligence. This tool was designed by human intelligence, after all. The problem is when human intelligence creates a tool so complex that it no longer understands how it works, what it does, and how to stop it.

There's no doubt that more things will be managed by machines and artificial intelligence but let's not make human intelligence a second class citizen.


I think what will happen is that a few people at the top will manage the machines and benefit from them, while trying to make the other humans second class. I think a scenario like “Elysium” is very possible. That is, until there is a revolution, and then nobody knows how that will end.


Without some robust general intelligence in the loop, whether that be AGI or human, computers will always have countless strange, unpredictable, and devastating failure modes, especially if given increasing responsibility.

This is the reason why few systems of significant complexity and risk exist today without humans in the loop.


>computers will always have countless strange, unpredictable, and devastating failure modes, especially if given increasing responsibility.

Sounds like most people I know. We are much better at reflecting on them though.


>I don't think it takes a math genius to see how this is a bad idea.

I agree, it has long been a key trope of sci-fi;

>We've crashed her gates disguised as an audit and three subpoenas, but her defenses are specifically geared to cope with that kind of official intrusion. Her most sophisticated ice is structured to fend off warrants, writs, subpoenas. When we breached the first gate, the bulk of her data vanished behind core-command ice, these walls we see as leagues of corridor, mazes of shadow. Five separate landlines spurted May Day signals to law firms, but the virus had already taken over the parameter ice. The glitch systems gobble the distress calls as our mimetic subprograms scan anything that hasn't been blanked by core command.

>The Russian program lifts a Tokyo number from the unscreened data, choosing it for frequency of calls, average length of calls, the speed with which Chrome returned those calls.

>"Okay," says Bobby, "we're an incoming scrambler call from a pal of hers in Japan. That should help."

from Burning Chrome, by William Gibson.

You could even argue that it is one of science fiction's founding tropes, as it can be traced back to the stories of the golem.


Different roles. Computational contracts are not about e.g. the "traders" on a security exchange, they're about the exchange institution itself, and the precise definition of how it should work and interact with agents. Behind any sort of "smart market" there's a computational contract of some kind, and the converse may also be true - inasmuch as it interacts with multiple agents who jointly determine the 'prices' involved, a "computational contract" is a smart market, and its usefulness can fairly be assessed as such.


I find Stephen Wolfram to be an interesting person. On one hand, he is undeniably exceptional and created an impressive computational system. On the other hand, the "Wolfram language" is really a pretty poor design as far as programming languages go, and would not even be noticed if it weren't for the giant standard library that gets shipped with it, called Mathematica. I use the "Wolfram language" because I have to, not because I want to.

In other words, if I could easily use the Mathematica library from Clojure, I wouldn't give the Wolfram Language a second glance. I can't think of even a single language aspect that would tempt me to use the Wolfram Language [context: I've been a Mathematica user since 1993] over Clojure. I have (much) better data structures, a consistent library for dealing with them, transducers, core.async, good support for parallel computation, pattern matching through core.match, and finally a decent programming interface with a REPL, which I can use from an editor that does paren-matching for me (ever tried to debug a complex Mathematica expression with mismatched parens/brackets/braces?).

This is why the man is a contradiction: his thoughts are undeniably interesting, but his focus and anchoring in the "Wolfram Language" is jarring.


It's quite clear what he is saying. Programming languages are what programmers use. And how programmers think is dominated by how a computer works, which is usually a gigantic distraction from actually solving real-world problems.

Newton, Maxwell and Einstein didn't need to waste any of their time thinking about how to use a computer to solve the problems they worked on.

If I ask Google for the 2018 Wimbledon champ, it tells me it has found 4,700,000 results (in 0.90 seconds). Take a step back and think about this.

They have their own knowledge graph. They have Wikipedia access. They have the ATP site cached. They have the Wimbledon site cached. But they aren't able to tell that the problem being solved doesn't need 4.7 million results.

This is the kind of mindlessness that happens when the focus is not on the actual problem, but what the computer can do.


"Newton, Maxwell and Einstein didn't need to waste any of their time thinking about how to use a computer to solve the problems they worked on."

Actually, a lost bit of maths history is that computational ability was really important; it was just really manual. Think about Gauss doing least squares to find the orbital parameters of Ceres by hand! Then the stories about him being able to multiply large numbers in his head start to make a bit more sense, not just as a parlor trick.


I think you make a great point, but that’s a bad example. Google shows the champions in response to that query, separate from the results. They show the results as well because of some combination of inertia and users wanting more info.


>Newton, Maxwell and Einstein didn't need to waste any of their time thinking about how to use a computer to solve the problems they worked on.

The cursor for the slide rule was invented by Newton.


"25. 20 points for naming something after yourself. (E.g., talking about the "The Evans Field Equation" when your name happens to be Evans.)"

http://math.ucr.edu/home/baez/crackpot.html


His company is named after his last name (as many companies are) and the product names start with the company name.

This has nothing to do with "The Evans Field Equation".


> On the other hand, the "Wolfram language" is really a pretty poor design as far as programming languages go, and would not even be noticed if it wasn't for the giant standard library that gets shipped with it, called Mathematica.

In the official documentation (as well as his blog posts), the term “Wolfram Language” is used to refer to the combination of the two: both the syntax and the huge standard library, because you almost always encounter one using the other. This seems pretty common to me — I’ve said things like “I used the Ruby language” when I’m also using its built-in modules.

I agree with you in saying that the language part of the Language is nothing particularly special, though.


I have interacted a lot with both Mathematica and Clojure, and the only thing I wish Mathematica had is a coder-oriented editor, like, for example, Cursive in the Clojure world. It would become a truly powerful thing. As for the quirks of the Wolfram language - I personally can live with them; it is good enough considering the requirement to keep compatibility with Mathematica's huge historical library. However, Mathematica's current approach to notebook editing is just a horrible PowerPoint-like mutation... at least for a coder like myself. I'd even love to help make it better if I had a chance to.


Funny, because I think the Wolfram language (also as a long-time user) is actually really well-designed and consistent, and has stood the test of time. Even ignoring the standard library, it has a really expressive syntax for pattern matching and functional programming, and I find that you can often do very complex things with very little code. For doing any kind of symbolic manipulation or lazy evaluation, it's pretty hard to beat.

On the other hand, I feel that his theory of cellular automata as some fundamental underpinning to mathematics is misguided.


Really? I found that it's fine when all your data is lists or matrices, but it becomes increasingly terrible if you want to process structured data. Maps (associative arrays) were added late and in a half-baked way, and using them is a pain. The way macros are written is pretty bad syntax-wise. There seems to be little consistency in terms of which arguments come first, so composing functions into pipelines can be awkward. Variables in "With" and "Module" cannot refer to earlier bindings, so I end up with multiple nested With/Module statements. Also, "Module" for lexical scoping, really?

Also, I never managed to achieve any reasonable degree of code reuse in Mathematica. Most of my notebooks are single-purpose.


> I find Stephen Wolfram to be an interesting person. On one hand, he is undeniably exceptional and created an impressive computational system. On the other hand, the "Wolfram language" is really a pretty poor design as far as programming languages go, and would not even be noticed if it weren't for the giant standard library that gets shipped with it, called Mathematica. I use the "Wolfram language" because I have to, not because I want to.

He is a bona fide genius. But it's difficult to tell if his jarring references to the 'Wolfram Language' and 'Wolfram Alpha' are simple, cynical selling, or if his vanity has blinded him into thinking the 'Wolfram Language' is a notable accomplishment on par with the other useful work he's done with physics, cellular automata and Mathematica.

In general, I am a conflicted fan. By many accounts he's an unpleasant person, and having read _A New Kind of Science_, his Principle of Computational Equivalence is really hand-wavy and not terribly rigorous.

And yet whenever he's mentioned in an HN story I always need to read it.


I have worked with a small number of exceptionally talented individuals (I don't know where to draw the "genius" line, but they were up there), and I see negative aspects of their personalities echoed in Stephen Wolfram. It looks like ego to me. People lose their humility after being told that they're better than everyone else for their whole lives.

Being an expert at one thing doesn't make you an expert at everything. But it's hard to realize that when you're king of your world.

A different perspective: highly creative (smart) people are those who make mental connections that others do not see but which exist. Crazy people are those who make mental connections that others do not see and which don't exist. Perhaps he sees the Wolfram language as superior in some way that doesn't actually exist, or perhaps we're not seeing something that does.


That last perspective is the most important one. I've known a few genius-ish people, and I recently had a jarring experience where one of them who's a bit older and retired, but definitely not mentally ill or demented, had gone off the cliff of believing in all kinds of crazy alien/government conspiracy theories and started working on some free-energy device, etc.

At first I found it shocking that he was being so completely irrational about these things and had become a "true believer" in so much crazy so quickly. But eventually I made the connection. All the innovative stuff he did earlier in life was no different. Every great idea he came up with and pursued with dogged passion was something that everyone else around him thought was stupid and crazy at the time.

Basically he's always been "crazy" in this sense, it's just that sometimes that ability to suspend disbelief and rationality works out well and you invent something useful that nobody else would've tried, and sometimes (probably many times!) it doesn't. His brain hasn't changed how it works, it just happened to latch onto the wrong thing this time.


Linus Pauling was a bona fide genius; he won a Nobel for his pioneering research into chemical bonds, made important contributions to our understanding of crystalline structures and various proteins, helped explain the molecular genetic cause of sickle cell anemia, etc, etc. He was a giant of science and will always be remembered as such.

But he also spent the last thirty years of his career refining and attempting to popularize a model for the structure of the nucleus (the "close-packed spheron model") that never seems to have gained much acceptance. He spent even longer advocating for megavitamin therapy, which is now generally regarded as quackery, and went to his grave pushing the idea that large doses of vitamin C can help cure cancer, despite numerous studies failing to support this.

I'm not saying that Wolfram is exactly like Pauling (although his fascination with his own work on cellular automata certainly seems to share some similarities to Pauling's fascination with the spheron cluster model). But I can't help but wonder if Pauling's reputation would be different if he had had a blog.


He also put the equivalent of a scientific hit out on Shechtman for quasicrystals.


Classic: "A Rare Blend of Monster Raving Egomania and Utter Batshit Insanity" - http://bactra.org/reviews/wolfram/


This is arguably the best distinction I've heard described between "mad" and "genius".


I think your last paragraph might be spot-on.


He's that way because he's a narcissist, not because he's a genius. Those kinds of people have difficulty stepping outside themselves and lack the ability to situate themselves in a world of different opinions. I imagine he's perpetually confused why anyone would use a different language.


Not directly related to the article, but going through the comments, I felt the vibes and wanted to express my theory about what it is that irritates many of the commenters here.

The highest pursuit is the search for truth. This includes the challenging act of discarding things that we'd really like to be true, but are false. The success of science can be attributed to extreme selfless intellectual honesty. The more ego gets in the way, the more the truth is compromised. Intelligence can be used in service of finding truth or in feeding our delusions. My view on the distinctions made here between genius and madness is that they correspond to the degree to which intelligence serves truth or delusions. Therefore I'd expect the most outstanding scientists in history were also very humble (perhaps someone with better knowledge of the personalities of great scientists can shed more light on this).

And to keep things consistent - I might be wrong and thus welcome challenges to this theory :)


While I definitely agree with you that this is the general pattern in life, the thing is that people aren't always either humble or arrogant, and it is possible to see the truth while being arrogant too. Even Wolfram must have days, or hours, maybe minutes - let's say seconds - when he's humble. And sometimes he also sees the truth while being arrogant.

Therefore it's a form of arrogance to dismiss (generally) arrogant people as always wrong. We have to be selective or we might miss something important they've seen.

Wolfram is a combination of irritating ego and inspiration.


You are absolutely correct, and I even deleted an entire section in my original post about how things are never just black or white, but it got too long. Thanks for bringing it up.


Newton was notoriously not humble. He's not the only one.


Interesting, thank you - I'll do some research.


I had this thought when I was 13 and first learned the basics of programming: "hey dad, if everything is determined by physics and chemistry and computers are real good at math, can't we analytically deduce the future?"

I eventually learned this was just the 13-year-old's version of "why's the sky blue?" turning into "but why is Rayleigh scattering a thing?", and that there are several limitations to both human understanding of science and theoretical computation -- a computer with sufficient accuracy to model the world would by definition need to have as much memory as the world, and model itself in it. I moved on from that idea shortly after learning that.

Is Stephen Wolfram just an overgrown child? Maybe unironically that's what being a genius is about.


> a computer to have sufficient accuracy to model the world would by definition need to have as much memory as the world, and model itself in it.

But what if we had certain "compression" abilities that allowed us to simplify the world? Similar to how an audio codec lets us massively reduce how much storage we need for music?


Magical compression algorithms with amazing compression ratios and amazingly low computational cost to decompress and process the data could theoretically raise the upper bound of a computational world. But compression cannot solve the unbounded recursion problem of the simulator having to model itself modeling the world, unless we arrived at an analytical closed-form solution for any prediction rather than relying on any modeling.


Or use lossy compression and get quantization noise. The model would probably get pretty weird if you looked at a fine enough detail. ;)


He did pay nice respects to Solomon Golomb (whose death anniversary was two days ago) so I’ll give him that.

https://blog.stephenwolfram.com/2016/05/solomon-golomb-19322...


To be honest I haven’t understood much of this.

Am I also the only one who’s very skeptical of AI? I see no correlation between what we call “biological thinking” and computation.

Even though I don’t know much of the theory behind AI, to me it’s similar to saying that since we have lots of simple calculators, we can arrange them together in some specific way and emergent intelligence will arise.

Sure yes I mean but you can say that about everything: let’s arrange a bunch of forks together and intelligence might emerge.

And actually, from a math point of view, you could by luck arrange some forks together and have intelligence, since intelligence seems to emerge from an arrangement of atoms.

I don’t see why computation has to get a go at intelligence, while anything else does not. What’s so unique about computation?


The last 80 years have seen a string of successes in arranging simple calculators in ways that produce impressive results. Google search, world-champion chess players, and Fortnite are all examples.

I don't know of any impressive results with arranging forks. If there were, you could model the fork behavior on computers and probably run it 1000000x faster.

It's certainly possible that some other elements than the simple calculators we use today will lead to the big breakthrough in AI. Perhaps quantum computation is needed. But right now, arrangements of simple calculators seem like the most promising.


> The last 80 years have had a string of successes for arranging simple calculators in ways that produce impressive results.

I view that as biological intelligence figuring out all sorts of clever ways to make calculating tools produce impressive results.

> Perhaps quantum computation is needed.

But why invoke physics if we're comparing biological intelligence to our computing tools? Isn't it enough to note the big differences? And here I'm not talking so much about brain function as about being a living, social animal that has to survive within a vast human culture.

This is the difference between looking at computers as stepping stones to artificial replacements as opposed to enhancements of our abilities. The AI stuff captures the imagination, promises fully automated utopias, scares us with apocalyptic scenarios, and is the stuff of lots of SciFi. While the reality is that computers have always been tools aiding human intelligence.

For some reason we view these tools as one day being like Pinocchio. The mythos of the movie AI is based on that vision, where the robot boy becomes obsessed with the blue fairy turning him into a real boy so his adoptive biological mother will love him and take him back.

But maybe, like in the movie, someday the sentient robots will show up and give our robot boy his wish.


Look at it this way: programs are a list of procedural rules, combined with input to yield output. Computers are physical objects that take that input, follow the rules (the program), and produce the output.

Humans (via intelligence) can also follow these rules, and produce the same output for a given input. So if some arrangement of forks gave rise to intelligence, then they would necessarily be a computer.

The central question is whether intelligence is something over and above computation. Coming down on this would require either: (a) an example of something intelligence can do that a computer (with the right program) cannot, or (b) some proof as to why no such example can exist.

I'm not confident we have the right philosophical tools to even approach this question right now. What does it mean for human intelligence to do a thing that a computer cannot? Say you do X -- I might just ask you to write down how you did it. If you are precise enough in how you did it, you've just given me a program.

Alternatively, even if you can't write it down, it seems plausible we could (in theory) write down the rules of physics. Then, given as input the state of your brain, we might iterate those rules forward and produce the output.

Going a bit off track here, but it seems to me that the only way intelligence could not be computed would be if intelligence was in some way non-physical -- and more, causal (so, not just epiphenomenal), it would actually have to be useful.


The unique part about computation is that we can throw an immense amount of energy at specific problems in isolation to force rapid evolution, and backtrack to snapshots easily if we don’t like where we ended up.

You can select for an intelligent swarm of fruit-flies, but it’s going to take you a lot longer to get your result than if you could simulate the whole process. Now, if it’s simulated, will you be able to walk your final result back to meat-world DNA? Not unless you included that level of detail in your simulation, which would be extraordinarily expensive to do and fraught with problems due to the scale. But even then, the simulation gives you some advantages: affect the flow of time, backtrack in time to explore other paths under alternate simulated conditions, etc. In the real world, you can’t have that kind of perfect reproducibility.


Universal computation is about information processing, and information processing is definitely an important thing that intelligent agents are doing.

(I wrote "universal computation" to make a distinction from common physical processes like what forks usually do - we may obviously say that a glass of water computes, in a second, where the water molecules of that water will be in a second, but that is too direct and misses the abstraction level that my mind has (sensing and modelling my environment) or a universal Turing machine has (manipulating symbols which may stand for (model) something).)


We don't know that human thought actually can run on a Turing Machine though.


A little late on this, but...

> Am I also the only one who’s very skeptical of AI? I see no correlation between what we call “biological thinking” and computation.

I think you're confusing several things together here, possibly (partly?) because:

> Even though I don’t know much of the theory behind AI,

I'll try to give you (to the best of my understanding, which may be faulty, so be aware of that) a "10000 foot understanding" - and hopefully (not likely) I won't drone on and bore you to death.

So we have here three things: something we call "AI", something called "biological thinking" (I'm not exactly sure what you mean by this - but I'll assume for now you mean the process of "thought" humans do with their brains), and "computation".

Of the three, computation is probably the simplest to define: It's what happens when a program - a set of rules - is run and followed. Usually there is some "input" or data that flows into these sets of rules, and at the end of the running of the program or process, there is an output. It is possible that the program may not "halt" (ie, stop); it may loop forever, and output something forever. Or it might hit a condition that does cause it to stop. Regardless, that set of rules being run (whether those rules are those of law, those of mathematics, a recipe to bake a cake, etc) is what causes computation.

I won't go into it deeply, but what Wolfram believes (whether he is right or not is for others to decide) is that there are simple programs - or sets of rules - throughout nature, and that these programs are executed due to natural processes, and that from these programs running, and thus performing computation, their output is what we see in our world. Furthermore, according to Wolfram, the computation being done is equivalent throughout nature; whether it is done on the substrate of silicon (ie, a computer), or the substrate of physics in a stream computing eddies and whirlpools...
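
To make the "simple programs" idea concrete, here is a tiny sketch (mine, not Wolfram's code) of one of the elementary cellular automata he studies, Rule 30: each cell looks only at itself and its two neighbors, yet the pattern that grows from a single "on" cell looks remarkably complex.

  def rule30_step(cells):
      # Rule 30: new cell = left XOR (center OR right); edges wrap around.
      n = len(cells)
      return [
          cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
          for i in range(n)
      ]

  # Start from a single "on" cell and watch structure emerge.
  width, steps = 63, 20
  row = [0] * width
  row[width // 2] = 1
  for _ in range(steps):
      print("".join("#" if c else " " for c in row))
      row = rule30_step(row)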

Ok - enough of that; let's move on. Biological Thinking...

As I noted, I think you are simply referring to the process that humans (and perhaps other creatures) using their brains to take in information via their senses, do some "processing" on it, and then output something (perhaps speech, or moving around, etc). Or maybe output nothing at all; just "thinking" or "ruminating" about something.

Do we understand how this works? No. We are studying it, and making fits and starts of progress, but it's been very slow going. We don't know how to define certain things, or even if they are definable, or if they are simply fictions of the brain; some things are - for instance, there have been studies that show that you make a decision to do something before you are consciously aware of making the decision, and your brain makes up a story after the fact to explain it. There are many, many other weird things like this that our brains do. We don't know why. We also don't know how they play into or alter our perception of "free will" - or if that is even a real thing.

One thing we do know - to a certain level - is how brains actually "work". I won't go into details - but yes, neurons, action potentials, spiking, etc. We think that certain other chemical things may be at play, among other theories and ideas - we're not completely sure it's all "electrical" (and actually at the synapse level, it's chemical - well, ion exchange and such - which is kinda electrical, but kinda not, too). But we know electricity plays a large role in how our brains work, how neurons work, and how they communicate with each other.

We also know they work together in networks, seemingly jumbled and disorganized at the cellular level, but not completely, and less so at higher levels.

So in short, we believe that "biological thinking" occurs due to networks of neurons communicating using mostly electrical activity, coupled with various chemical and other activity.

It would be akin to looking at a factory and only being able to understand how the individual parts of each machine work together, and maybe certain subsystems of those machines, while still having no understanding of how the factory produces what it does, but believing it has to be in large part because of those individual parts working together.

Forests and trees, as you can certainly tell...

...part 2 follows...


Part 2:

So lastly, AI. Something to know off hand is that AI encompasses many things historically, but if we are talking about today, then AI is basically focused on two areas:

* Machine Learning

* Neural Networks (Deep Learning)

Machine Learning can be thought of more as "applied statistics" - although that is a gross simplification. But it is somewhat accurate, in that in statistics there are various known methods and algorithms that can be "trained" using a set of data, and that training, once completed, allows these algorithms to decide on an output based on completely new data fed into them. Most of these algorithms and methods can only be easily fed a few data points, and can output either a "continuous" value (if you will) - that is, usually a floating point number that represents some needed output (ie, if the method were trained on two data points of current temperature and current humidity, it might output a value representing the speed of a fan) - or 2 or more (but usually only a few) "categories" indicating that the input means particular things (taking our earlier example, maybe instead of fan speed, it would output whether to turn a fan on or off, or whether to open or close a window).
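
As a concrete (and entirely made-up) illustration of that "applied statistics" flavor, here is a minimal sketch using scikit-learn: a classifier trained on a handful of invented (temperature, humidity) readings that decides whether to turn a fan on or off. The data and numbers are placeholders, not real measurements.

  # Toy "classical machine learning": logistic regression on two inputs
  # (temperature in C, relative humidity in %), predicting fan on (1) / off (0).
  from sklearn.linear_model import LogisticRegression

  X = [
      [18.0, 40.0],  # cool and dry   -> off
      [21.0, 55.0],  # mild           -> off
      [19.0, 65.0],  # cool but damp  -> off
      [28.0, 60.0],  # warm           -> on
      [32.0, 70.0],  # hot and humid  -> on
      [35.0, 80.0],  # very hot       -> on
  ]
  y = [0, 0, 0, 1, 1, 1]

  model = LogisticRegression().fit(X, y)

  # A completely new reading the model has never seen:
  print(model.predict([[30.0, 75.0]]))        # likely [1], i.e. turn the fan on
  print(model.predict_proba([[30.0, 75.0]]))  # class probabilities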

Neural networks, on the other hand, work closer to how "biological thinking" (well, more like the neurons and the networks they form) actually works. Basically, you have a bunch of different nodes, arranged in layers; think of a simple three-layer network...

The first layer would be considered the "input layer" - it could consist of a few nodes, to hundreds of thousands or more, depending on what is being input into the network. The middle layer is usually much, much bigger than the input layer. It is also termed "hidden", in that it isn't directly interacted with. Each individual "input node" in the first layer is connected to every individual neuron in the hidden layer. So, input node 0 is connected to hidden layer neurons 0...n, and input node 1 is connected to hidden layer neurons 0...n and input node i is connected to hidden layer neurons 0...n.

As you can see, it's a very tangled, but organized web that forms between those first two layers.

The third layer is similar, except it is formed of a scant few neurons; it's known as the output layer (a brief note here - it is possible to have more than one hidden layer of neurons, but mathematically, from what I understand, multiple layers are no different than one large single middle layer - that's just my understanding).

The output layer could have only a single neuron; its value could again be that "continuous" output I mentioned earlier. Or it could be multiple neurons, each representing a "classification" of some sort; thus, the layer could have anything from one neuron to potentially hundreds, depending on what you are training the neural network for.

Let's say you're training a network to take a simplified image of a road, and transform that into an output to be fed into the steering system of a vehicle. Your input node layer might consist of, say, 10000 nodes (for a 100 x 100 b/w pixel image). Your middle layer might consist of 10x that number of neurons, and your output layer could consist of a single neuron (outputting a continuous value representing the steering wheel angle) or a set of discrete neurons representing a class of various steering wheel positions (hard left, soft left, left, straight ahead, right, soft right, hard right).

You'd train this network on a variety of images, and it would output (hopefully) the correct answer for driving the vehicle around based on images and what was done in response. Essentially, the images it is given to train on would consist of individual black and white images of a roadway, and what should happen at that point (keep going straight, or turn in some manner). If, during training, it does the wrong thing, an error amount is calculated, and that is used to update numbers within the neurons that make up each layer (output and hidden - there are no neurons in the first layer), to make their calculation the next time more accurate. This process is called "back-propagation".
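
For anyone who wants to see the mechanics, here is a heavily scaled-down sketch of the three-layer-plus-back-propagation idea described above (my own toy example, not how a real steering network would be built): 2 inputs instead of 10000 pixels, a few hidden neurons, one output, learning the XOR function.

  import numpy as np

  rng = np.random.default_rng(0)

  # Training data: 2 inputs per example, one target output (XOR).
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0], [1], [1], [0]], dtype=float)

  # Weights and biases: input -> hidden (2x4), hidden -> output (4x1).
  W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
  W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  lr = 1.0
  for step in range(10000):
      # Forward pass: input layer -> hidden layer -> output layer.
      hidden = sigmoid(X @ W1 + b1)
      output = sigmoid(hidden @ W2 + b2)

      # Error at the output, propagated backwards ("back-propagation").
      out_err = (output - y) * output * (1 - output)
      hid_err = (out_err @ W2.T) * hidden * (1 - hidden)

      # Nudge the weights so the next prediction is a little more accurate.
      W2 -= lr * hidden.T @ out_err
      b2 -= lr * out_err.sum(axis=0, keepdims=True)
      W1 -= lr * X.T @ hid_err
      b1 -= lr * hid_err.sum(axis=0, keepdims=True)

  print(np.round(output, 2))  # typically ends up close to [[0], [1], [1], [0]]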

Interestingly, at a very base level, the computations being performed by a neural network, aren't much different than those from "classical machine learning" algorithms, but because their inner workings, which for the most part are fairly opaque to study, are extremely complex, they allow for things which the classical algorithms couldn't touch, namely the ability to feed in very large numbers of inputs, and get back out very large numbers of outputs.

Given enough "labeled" training examples, this process works extremely well, but for it to be useful for a variety of tasks, such networks need to be composed of hundreds of thousands to millions of neurons, each connected to each layer above and below, and it takes an immense amount of computational hardware (in the form of GPUs, usually) and power to do so. It also needs a boatload of training examples. All of these needs are why neural networks, despite being played with in various ways for well over 60 years, didn't really take off with promising and useful results until very recently, when the amount of data to be had, and the computational ability to process that vast amount of data, became available (again, GPUs). Basically a perfect storm. This is all known as "deep learning".

Now, I'm going to leave you with something to ponder about:

We are doing something wrong. For one thing, as far as anyone has been able to determine, the idea and workings of "back-propagation" have no biological analog. Back-propagation is something that only happens in the realm of these artificial neural networks, and does not occur (as far as we know) in a natural neural network.

It is also very, very computationally intense - ie, it sucks a lot of power. We haven't even scratched the surface of building a very large scale artificial neural network, and already what we currently have takes a ton of power, well beyond the very meager power consumption of a single human brain.

It should be noted, if it wasn't apparent earlier, that the model of the neurons used in an ANN (artificial neural network), is grossly simplified to the actual workings of real neurons. There are studies and systems out there that seek to implement and study ANNs using models which more closely represent how actual neurons work (ie - spike trains, things that happen at the synapse level, etc) - but they take even greater amounts of power to run, and can't be used for much more than research.

You can take this and still be "skeptical of AI", but that would also dismiss the vast strides we have made in the last decade or two in the AI/ML field. I also hope I've been able to show or allude to where/how there is a correlation between AI (well, the ANN part of it) and "biological thinking".

I hope this helps in some manner to explain things and to lessen the confusion as to what everything is and what it means...



Before I clicked, I expected it to be this one: https://xkcd.com/419/


I watch his live CEOing YouTube streams occasionally. They're quite interesting, but at some point I feel like I am stuck in a work meeting, with no real objections (understandably) being posed to Wolfram's proposals.

I also think the word computation is used a bit too grandiosely by Wolfram. Which is evidenced in the writing here.

I do admire Wolfram for even advertising his CEOing meetings, though. He goes into detail to a level you would not necessarily think a CEO would. Credit where credit is due, he is not shy. Many CEOs would not bother.

I mean, technically, everything is a computation, but we credit actually complex things with that term, not everything. To use it willy-nilly deflates the word's impact.


The idea of small rules leading to large patterns is not something Wolfram invented, but he seems to think he did. I think Conway's Game of Life predates it, as does Mandelbrot; then there is the Dürer pentagon.

Maybe there is something I am missing though?


From what I have heard, what you're missing is Wolfram's ego. That's why he magnifies the importance of what he has done.


I mean, Mathematica is a truly great, great piece of software, but his "New Kind of Science" was both utterly worthless and a fantastic treatise on how a person's grandiose religion of self leads to utter delusion.


I started to read it when I was in college, and read hundreds of pages into it before I started to realize that there was nothing there. The entire book can be summed up as "complexity arises from multiple iterations on simplicity" and, like, you can teach that to a middle schooler in a day; no need for a thousands-of-pages-long tome.


It was not worthless. It made a great doorstop.


On that note... Is he really claiming to have invented the Church-Turing Thesis years after their deaths?


So he's comparing his eponymous programming language to the arrival of mathematical notation 400 or so years ago, and the advent of written language.


Yep. That kind of thing is pretty much par for the course for Wolfram.


perhaps wolfram is parodying his main idea in his work output. from a basic origin he's creating monstrous bloat and complexity, as a type of demonstrative self-referential proof.

there are people who write code until it works, and people who rewrite until it doesn't. room enough for both.


Excellent observation. Perhaps it’s a twist on Conway’s Law?


If anyone is interested, Wolfram does what he calls “Live CEOing”, and streams his sprints on YouTube: https://www.youtube.com/user/WolframResearch


These sessions are streamed live on Twitch, too.

These sessions are vastly interesting. I think many commenters here should listen-in sometime.

The Wolfram Language is hard because mapping knowledge is hard.

What many people here call "ego" is one guy, and his talented team, tackling an enormous problem nobody has licked. So far.


I love this kind of optimistic view of software, like the one I had when I first got into computers.


Think of all the people in prison, justly or unjustly, under our current system where humans have to interpret and carry out our legal code. We may extend these practices into every domain at negligible cost. Why do we desire authority without conscience? I can think of two reasons: one, we do not wish to hold ourselves accountable; two, we desire to see the strong afflict the weak.


Did he claim to invent computational irreducibility? Wasn't that Goedel or Turing or someone like that?


He claims it often. Yes, it was Turing, mostly based on previous work.

The worst part is that he misses the detail that it is possible to describe all of math as computation (so the two are perfectly equivalent), and creates entire books with keen observations about how much of math one can describe as computation.


Wolfram might be a genius, but I personally find his writing and lectures extremely dull. The only thing he seems to write about is how everything is "computational", and how we need to embrace his idea that computing is what the whole universe is about. For example, in this blog post alone, we can find the following word counts:

  computer - 19x
  computation* - 96x
  computational - 63x
Make everything computational!

  computational intelligence - 2x
  computational contracts - 7x
  computational universe - 7x
  computational language - 18x
  computational thinking - 2x
  computational fact - 1x
  computational acts - 1x
  computational equivalence - 8x
  computational irreducibility - 6x
  computational system - 1x
  computational process - 1x
  computational work - 1x
  computational essays - 2x
  computational law - 1x
Throwing all these words around may sound smart, but they lose any meaning or relevance they were supposed to deliver when overused in such a larger-than-life manner.

</rant>


Welcome to Stephen Wolfram. Wolfram is madly in love with his own ideas, even when those ideas are not nearly as original or world-shakingly profound as he seems to think they are. That he is a genius does not mean he is not in a state of arrested development; prodigious genius like his can impede one's development because people around the genius are so in awe of them that they are unwilling to give them the much-needed corrections and reality checks it takes to grow.


It's very hard to teach someone whose cup is full. Very smart people can rationalize their bad ideas faster than other people can talk them out of them. But that doesn't mean you've actually explained yourself. If you can't explain things to other people, do you really understand it yourself?

At this point Wolfram is lost to us. "What the hell are you talking about, Steve?". I was just fantasizing about resurrecting Richard Feynman and having him ask Wolfram this question but it turns out I don't have to:

In a letter from Richard Feynman to Stephen Wolfram:

> You don’t understand "ordinary people." To you they are "stupid fools" - so you will not tolerate them or treat their foibles with tolerance or patience - but will drive yourself wild (or they will drive you wild) trying to deal with them in an effective way.

> Find a way to do your research with as little contact with non-technical people as possible, with one exception, fall madly in love! That is my advice, my friend.


> fall madly in love!

And he took that advice with a ferocious passion that not even Feynman could have foreseen. Stephen Wolfram is madly in love with Stephen Wolfram!


> If you can't explain things to other people, do you really understand it yourself?

I hear this said every now and then and I don’t think I agree with it.

A lot of knowledge builds on other knowledge. And the other knowledge might be required in order to understand the knowledge that one wishes to share.

So then either you have to explain it all, or you will have to simplify dramatically.

Few people will want the full explanation — it would take much more time to explain than they would ever want to spend listening. And I am not saying that to criticize those people. The same goes for myself — there are very many things I wish I could understand deeply. But there simply is not enough time. Specialization is necessary. We must pick the few things that interest or have the most use to us, if we wish to have a chance to truly understand any topic deeply at all. And even in the topics we choose there will be a lot of subtopics that we won’t ever have time to understand as fully as we would desire.

Even if they did want to listen, the amount of information could be so huge that it might take several years to explain all of it. And so it can't be done.

So instead we have to simplify. A lot. Unfortunately, when we simplify we often end up skipping crucial information and might even communicate the knowledge wrongly. In the cases where at least we don’t communicate it wrongly there is still a significant risk I think that one or several of

- the truth,

- the value, and/or

- the utility

of the knowledge we try to share is not possible to see without a lot of context that would take too much time to explain.

I have seen time and time again other people jump from

- the fact that someone was unable to explain something in a satisfying way

to

- concluding that the person that tried to explain it doesn’t himself/herself understand it

When really I think the real conclusion should instead be that it is impossible to determine at that present time whether what is being said is correct. But that neither means that the other person doesn’t know, nor that they do know!

I think the fundamental problem is that we keep insisting that things should be simple when the reality of the matter is that they just aren’t. Or perhaps there is some simpler answer but the only path known by the person to understanding the knowledge they possess is based on the knowledge they learned before they learned the knowledge in question. Then it could be possible for that person to boil it down to something that does indeed both hold truth and is simple to explain. And perhaps that it is what it means to understand something at the deepest possible level — to have the explanation worked out so that it builds on the fewest and most simple prior knowledge required. But to sit down and make those simplifications could also take years. And unless it is the job or the desire of the person holding the knowledge to spend all of that time they could spend on other things just to work out the simplest possible explanation of the knowledge then it won’t make sense for them to do so.

There are a few things more I would like to say about my thoughts on this too but I think this comment of mine has probably turned into a big wall of text already. So instead I will conclude with saying that I think that the ability of one person to explain something to someone else

- relies on their own understanding yes

but

- it relies to a much greater extent on the amount of knowledge that both parties already share

and

- a lot of things in daily life builds on things we all know already, and does not extend it too much, so it is quick to explain and simple to understand

but

- some knowledge is so far removed from shared knowledge, at least by the path to it known by the person trying to explain it, that it is for all practical purposes impossible for that person to explain to the other


You highlight a common fallacy that is repeated across the internet: "If you can't explain to me why I'm wrong, then I must be right," or something similar. But as you point out, this assumes that an explanation can be written in just a few paragraphs, which usually isn't the case. It takes years of study and experience to become an expert in a given field, and as a result, it becomes too difficult to accurately convey the necessary information to get a point across.


That reminds me of the movie "Bohemian Rhapsody", where Freddie Mercury splits off from Queen to write music independently.

But he returns saying:

"I went to Munich. I hired a bunch of guys. I told them exactly what I wanted them to do... and the problem was... they did it. No pushback from Roger. None of your rewrites. None of his funny looks. I need you. And you need me."


You might have seen his table analogy for abstractions, where the concept of a table was an abstraction defined by a set of nouns and adjectives. Yes, "make everything computational" because we all have not yet come to understand the abstract world that he is foretelling as well as we understand the word table.

For another analogy, just because you frame a house with one material, e.g. wood, doesn't mean that it's structurally unsound.


I do hope somebody who can write is reading his stuff carefully or at least insightfully. He seems like a great character to inhabit a science-fiction story. It would be a pity not to see him there.


That this ad hominem attack is the top voted really illustrates the deterioration of the community here.

You've added nothing of value.


I mean, did you either?


All the bad people need is for the good people to do nothing.


As much as I wish Wolfram would dial back the egocentric tone in places, this is overall an excellent read.


I wish he'd cite sources rather than implying by omission that everything is his idea. There's a reason A New Kind of Science was self-published.


I borrowed it once from a friend, and after reading a few chapters felt that it was mostly smoke and mirrors surrounding something that was already known and could be explained far more succinctly. The Amazon reviews showed that others felt the same.

Wolfram is brilliant, but I'd rather read a book from him that showed him solving all sorts of neat problems.


Bought the book some years ago, but never really got through it. Keeping it handy though in case I need to knock out a burglar. Thing is massive, shipping was half the cost ;)


And the same reason I returned it to Barnes & Noble within two days.


> And, yes, there’s only basically one full computational language that exists in the world today—and it’s the one I’ve spent the past three decades building—the Wolfram Language.

:/


If he means, from a mathematical point of view, Mathematica's ability to both symbolically and numerically solve advanced mathematical systems of equations, he's not technically wrong, even if his oversized ego oozes out of every word he utters.

If, however, he's talking about general computational systems vis a vis creating programs to run on today's microprocessors, he has obviously drunk too much of his own kool-aid.


> ability to both symbolically and numerically solve advanced mathematical systems of equations.

I mean, python, matlab, julia, octave, sage, and maple would all fit that definition I think. I do think Mathematica's CAS is the best in the business but not the only player for sure.

I'm sure Stephen has something in mind that sets the Wolfram Language apart, just not sure what that is.


[flagged]


What's really at it again is the “Wolfram and his ego...” comment. If it's so static then we don't need to mention it.



