The Unparalleled Genius of John von Neumann (medium.com/cantors-paradise)
636 points by jorgenveisdal on Nov 15, 2019 | 319 comments

"Von Neumann would carry on a conversation with my 3-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us." - Edward Teller

See [0] for a demonstration.

I watched a documentary from the '80s a long time ago. A mathematician (I can't remember his name) who worked with von Neumann at Los Alamos was interviewed. He described von Neumann's last weeks in the hospital - the cancer had already metastasized into his brain. The mathematician said something along these lines (I am citing from memory): "von Neumann was constantly visited by colleagues, who wanted to discuss their latest work with him. He tried to keep up, struggling, like in old times. But he couldn't. Try to imagine having one of the greatest minds maybe in the history of mankind. And then try to imagine losing this gift. It was terrible. I have never seen a man experience greater suffering."

Marina von Neumann (his daughter) later wrote this about his final weeks:

"After only a few minutes, my father made what seemed to be a very peculiar and frightening request from a man who was widely regarded as one of the greatest - if not the greatest - mathematicians of the 20th century. He wanted me to give him two numbers, like 7 and 6 or 10 and 3, and ask him to tell me their sum. For as long as I can remember, I had always known that my father's major source of self-regard, what he felt to be the very essence of his being, was his incredible mental capacity. In this late stage of his illness, he must have been aware that this capacity was deteriorating rapidly, and the panic that caused was worse than any physical pain. In demanding that I test him on these elementary sums, he was seeking reassurance that at least a small fragment of his intellectual powers remained." [1]

[0] https://www.youtube.com/watch?v=vLbllFHBQM4

[1] https://www.amazon.com/Martians-Daughter-Memoir-Marina-Whitm...

Thanks, interesting story that I will pass on to my Dad (who worked with John von Neumann at the Institute for Advanced Study when he was young; Edward Teller later hired my Dad at Berkeley and they were lifelong friends).

My Dad says that von Neumann liked to play his music loud and work late at night with no interruptions. I remember Edward Teller coming to a costume party at our house; my Mom and one of her friends had a white bird costume for him to wear. He refused to wear the bird's head, just wore the white wings. The next day in Herb Caen's newspaper column he incorrectly talked about Edward Teller wearing an angel of peace costume.

I've taken a medication that occasionally makes me feel stupid. For a day I might be unable to continue reading a book without re-reading each sentence a few times. I might not be able to talk to people about a complicated or technical subject without stumbling over my words. I will find it difficult to think in abstract terms, and when I listen to others who talk about abstract values and concepts I have to continually relate it back to a concrete example, otherwise I'm lost. My IQ is normally between 140 and 155. When I go through these brain-fog days I estimate it's at around 90.

It feels terrible. When I first experienced it, I was terrified that it would be permanent, because that would keep me from doing my job and keeping up with my interests and hobbies. Now I only get this brain fog every so often. I think it helps me communicate with people better and has helped me learn patience. I finally understand how some people might be genuinely, earnestly trying to understand what I'm trying to say or teach, but can't, because I'm not communicating it at the right level of analysis.

If you're intellectually gifted (many programmers are), you shouldn't take that for granted. You got lucky, and if you weren't lucky enough to be intelligent, it might have been impossible to do the same kind of work you do today. Please appreciate that.

In many ways what you describe feels similar to having a high IQ and ADHD. Sometimes being able to dig deeper into a topic than most, but sometimes entering a bit of a "mental fog" where you're barely able to think at an average level. But instead of medication it can be "triggered" by sudden surprises (e.g. unexpected questions or situations can derail my mind for a few minutes), or environmental/nutrition changes, or even just lack of sleep.

Still it forces you to think at different levels of mental acumen and appreciate the differences people have in mental quickness. Though it’s also a challenge when people with constant “IQ” assume you don’t know a given topic if you’re in an off moment or day. Luckily the more experienced people seem to pick up on that. In the end communicating across different intellectual levels makes for a humbling challenge.

The sad reality is that cognitive decline over age is more likely than not. I have prebuilt a set of habits that I hope will serve me for when I need to rely on them to get me through the day.

I had a conversation with an elderly neurologist at a wedding on this topic. His top item of advice was to do things that put your awareness into your moving body as much as possible, as a daily habit. No surprise he and his wife were the only folks in their age bracket rocking out on the dance floor later.

> to do things that put your awareness into your moving body as much as possible

Could you explain it in simpler terms?

I think this means doing things that require you to think consciously about your body coordination. So dancing, rock climbing, motorcycle enduro racing, etc.

Interesting. Does that mean that mostly relying on muscle memory is not as beneficial in this context? E.g. if you're already good at some technical sport (e.g. tennis, basketball), playing it wouldn't be as beneficial as, say, learning a new dance?

Interesting. What kind of habits?

Glasses and ID together. Car keys on top of work items like laptop and badge. White board next to front door with dates and reminders and mail to drop off. Arrange things like you would for a Sims character.

Oh... yeah, those make sense. I already do a lot of those things so that I can be sure I don't forget anything.

And I'm not even "old" yet...I guess it will pay off in the long run.

I hope that regular exercise of my body and mind can minimize this.

> If you're intellectually gifted (many programmers are)

Let's not jerk each other off too hard, eh? Just like everybody in life, in any trade or profession, most programmers are just average people like everybody else. They just happened to luck out and be good at a talent that pays well.

Odds are incredibly high you aren't any smarter than somebody who paints houses for a living, fills your prescription, or plays football for the NFL. We are all just people trying to make a living -- don't ever forget that.

Accountants, pharmacists and programmers are smart. They have to be to do their job. Decorators, sports people and drivers, for example, can be smart, but it is not required to do their job. I agree we should all respect each other's talents and abilities that go beyond adding up numbers and thinking in symbols, since these are boring talents anyway, but let's not stretch the truth and say that everyone is equally 'smart'.

You may be using "smarter" to mean something different than IQ, but if you mean IQ then you're wrong. Different professions have different distributions of IQ, and programmers are among the professions with the highest average IQ. They definitely skew higher than house painters.


Programmers may not be wiser, kinder, or better than anyone else, but on average they are a bit smarter.

If you genuinely believe programmers have a higher IQ than normal, Dunning-Kruger would like to have a word with you.

To many engineers are way to convinced that the fact they know any kind of engineering makes them automatically smart at everything outside of their narrow domain of expertise (eg: programming).

Err, this is weird.

Programming is correlated with higher IQ.

IQ is correlated with spatial reasoning and the kinds of multi-step puzzle solving that is often needed in programming.

I don't know why you find this so surprising or offensive. It's simply that the IQ tests the kinds of things that often make good programmers.

IQ isn't the only thing correlated with programming ability, and people with high IQ are often extremely dumb in many areas.

(I don't have a particularly high IQ)

Unfortunate use of “to” or the “Krugered” programmers might have listened


I don't know if coding makes you smart, but the causal relationship might go the other way: you're more likely to be good at coding, the smarter you are.

There is a reason why a cashier works for minimum wage.

A few days ago I went to a grocery store to buy sparkling water. I got 16 small bottles and arranged them as a 4 by 4 grid before the cashier, so she wouldn't need to spend time counting them and delaying the line. It took her over 10 seconds to count the bottles: apparently, she didn't recognize the pattern and counted them in groups of 3-4 bottles, covering the counted ones with her hands to simplify the process. Programmers don't even need to multiply 4x4, they just see the answer. This scene hints that the cashier's analytical skills are next to none and this alone explains why she is a cashier.

Another example. There is a curious simple test to check your working memory capacity. Imagine a 3x3 grid and write the words oil/gas/dry into it. Now read off all the words that you see on the grid. Not all people have visual imaginations: some operate with graph-like structures and represent the grid as a set of logical statements: row 1 is gas, row 2 is oil, row 3 is dry. This is fine as long as they do it efficiently. Most people in this task will resort to the snail-paced analytical approach and will be thinking like: cell 2-1 is A, so the cell to its right, 2-2, belongs to word 2, which is oil, and thus the second letter is I, so our current sequence is AI. Obviously it will take them ages to enumerate all the words this way. High-level programmers can keep the entire grid in memory, either as an image or in symbolic form, and thus can enumerate the words quickly. Why does this example matter? Programmers have to keep many objects and the connections between them in memory.
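For the curious, the grid exercise above can be sketched in a few lines of Python (assuming the layout is one word per row; the row order and which sequences count as "words" are my assumptions, not spelled out in the comment). Reading across recovers the original words; reading down yields the column sequences the exercise asks you to "see" mentally:

```python
# One word per row of the 3x3 grid.
rows = ["oil", "gas", "dry"]

# Reading across each row gives the original words back.
across = list(rows)

# Reading down each column gives three new letter sequences.
down = ["".join(row[i] for row in rows) for i in range(3)]

print(across)  # the row words
print(down)    # the column sequences
```

The point of the mental version, of course, is doing exactly this enumeration without the computer.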

If everybody had these raw cognitive skills, programmers would get the standard minimum wage. Same for accountants, lawyers, bankers.

> This scene hints that the cashier's analytical skills are next to none and this alone explains why she is a cashier.

Her numerical analytics skills perhaps, but she might be great at something else. Maybe she is a poet, or a musician, or great at handling 5 kids, or something else that you are terrible at.

Just something that doesn't pay well, so she has to be a cashier.

This is kind of a form of Moravec's paradox: we assume what is easy for us must be easy for others, and what is hard for us must be also hard for others too.

I can't find the reference right now, but this was a famous problem in early testing of monkeys' intelligence. They showed the monkeys pictures of humans and the monkeys couldn't distinguish them, so it was assumed they were fairly dumb. But then eventually someone figured out they were much better at distinguishing pictures of other monkeys.

Of course it's still possible that the cashier is not good at anything, but in my experience that's very rare. Most people have some skills.

Eh. Poetry and handling kids is orthogonal to intelligence.

Intelligence is a very narrow and specific skill of discerning the real. It allows us to predict things. Coincidentally, it allows us to make money and thus is well paid. It's ok if others disagree with me.

My two examples above are meant to hint that intelligence consists of two distinct skills: ability to keep a still detailed image in your mind (the 3x3 grid example) and ability to analyze this still image (the counting bottles example). The latter builds on the former because if the image is blurry in your mind, there is nothing to analyze (you can't see words in the 3x3 grid if it keeps floating away). We can go further and divide the two skills into say 10 distinct levels of mastery, define their characteristics, but the point remains the same: it's a steep ladder that has to be climbed if one wants to get this skill.

Poetry, music and handling kids are different skills and are of no help in climbing the ladder of intelligence. Now, I agree that compassion and other skills can be as useful as intelligence, but they are different skills and have to be mastered separately.

Just my 2c.

Handling kids, maybe, but poetry and music are only orthogonal to intelligence at the amateur level. Professional poets and musicians tend to be highly intelligent even by your definition.

Do you spend much time parenting?

> Imagine a 3x3 grid and draw there words oil/gas/dry. Now read all the words that you see on the grid.

I can't do this for shit and I'm a genius.

It took me about five minutes to even understand the task. I first pictured a 3x3 grid with a word in each box. I thought I was supposed to populate the boxes with any word from the list -- oil, gas, or dry -- and then read them back.

I guess I'm living proof that not all programmers are smart.

I bet you can still enumerate them though through a different method.

Walking home that day I was obsessed with this. I guess I did okay. Found at least one five-letter word ("grail", I remember). I can't see all the letters at the same time, so I feel like I'm cheating when I do it.

On the other hand even when I think of a three letter word I don't see the letters, I just kind of know what they are.

I can see the paper title now: "The correlation between IQ scores and the tabletop game Boggle".

He said programmers are likely smart, not that others are dumb. Jeez, what's with everyone picking imaginary Internet fights everywhere?

What absolute nonsense. Interestingly you refute the idea that intelligence determines someone’s “worth,” yet from the start you read that into the original comment even though the commenter said nothing to suggest that viewpoint.

Your second paragraph is wholly unrelated, since the OC isn't saying that every programmer is smarter than everyone else. And your first is trite humanist garbage.

“Just happened to luck out and be good at a talent that pays well”

And, coincidentally, that “talent” (abstract thinking, clarity and extended focus of thought) “””happen””” to be highly correlated with higher IQ.

Still rats in the race, just rats with a slightly higher score on a particular dimension.

It's not nonsense. I can't stand people who post crap like "OMG I can program so I have to have a higher than average IQ". And then they link to articles that support their little circle jerk. No. Sorry. You don't. You are average. I'm average. We are all average. Just 'cause you can code doesn't mean you are suddenly god's gift to intelligence. You aren't. Really. Sorry. You are just as stupid as everybody else.

John von Neumann. Dude is a genius. Einstein. Also. you? me? sorry. Doesn't matter if you can code. People can paint cars way better than you. They can make a perfect french fry. They can inspect a clogged sewer line way better than you. They know just the best ways to start an IV. Who is more intelligent?

Just because you can do $PROFESSION doesn't make you above average in intelligence. Period. Full stop. To think any other way (and for somebody to post a link to some BS article supporting their assertion) is completely absolutely 100% arrogant BS. Get off your high horse people.

You're stunning me here. It's like you're objecting to someone saying gymnasts are coordinated or air traffic controllers can handle a lot of stress. Intelligence is a core qualification for programming, like height for basketball. You can't say "I play basketball, so I'm taller than others", but you can say "We're all basketball players, so we're likely taller than most". We're _selected_ for intelligence.

Citing Kathy Sierra’s book, Badass (pp 91–92)

> Evidence has been mounting for decades that for most non-sport domains and for most people, "natural talent" is not an absolute requirement for reaching high levels of expertise.

> Other than the sports that depend on specific physical prerequisites, few domains have hard genetic limits for expertise. There is a way in which natural ability might contribute to high expertise in non-athletic domains, but it's not domain-specific "natural" gifts... it's a natural ability for focused practice.

> Even at the very top levels in most non-sport domains, there’s little evidence that “natural talent” is a hard requirement. But where it might exist, it’s most likely to show up at the beginning of the curve and the very very very top.

That may be an inspiring book, but if you're interested in what's actually known about talent, success, and the limits on our ability to measure and predict such things, then you should look elsewhere.

(Ask yourself how well the writer could explain monty hall, or regression to the mean, or what's wrong with p-values, to get an idea whether they can possibly have a handle on the material.)

Evidence has been mounting for decades that for most non-sport domains and for most people, "natural talent" is not an absolute requirement for reaching high levels of expertise.

This is probably true for programming too.

But just because something isn't necessary it doesn't imply there is a correlation.

One Muggsy Bogues proves natural height is not an absolute requirement for playing in the NBA, but does not prove that basketball players are not taller than average.

>It's like you're objecting to someone saying gymnasts are coordinated or air traffic controllers can handle a lot of stress.

It's the causation that's interesting, though: do gymnasts become coordinated through practice, or do the most coordinated people go into gymnastics? I think the OP is arguing the former, and I think I agree. In that case, programming is not unique re: intelligence.

> Just because you can do $PROFESSION doesn't make you above average in intelligence.

That was never claimed; the claim was different:

FACT: The groups of people that work in certain professions definitely have _average_ differences in IQ if we can accept that IQ exists.

That is NOT to say that any individual mathematician can say that he or she is more intelligent than the rest of the population.

Understand the difference.

>Interestingly you refute the idea that intelligence determines someone’s “worth,”

Well, this is true: intelligence does _not_ determine someone's worth.

Ultimately how intelligent you are is irrelevant providing that you _can_ provide value.

>Still rats in the race, just rats with a slightly higher score on a particular dimension.

Yes, but surrounded by other rats with a slightly higher score on a particular dimension.

But anyone can still learn how to do it: abstract reasoning is not reserved for the "intelligent", it's open to anyone who is of average intelligence at least.

abstract reasoning is not reserved for the "intelligent", it's open to anyone who is of average intelligence at least.

Interestingly this implies that the average programmer does have higher than average intelligence (as measured by IQ scores).

The reasoning goes like this:

- At least average intelligence is required for abstract reasoning (and programming)

- This means that the distribution of intelligence for programmers has a minimum around the average value

- This means that the mean and median must both be above average.
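The truncation argument above is easy to check with a quick simulation (the entry floor of 100 and the IQ-like parameters are illustrative assumptions, not real data about programmers): if a group only admits people above the population mean, the group's mean must sit above it.

```python
import random

random.seed(0)

# IQ-like scores: mean 100, standard deviation 15.
population = [random.gauss(100, 15) for _ in range(100_000)]

# Hypothetical profession that requires at least average intelligence.
admitted = [x for x in population if x >= 100]

pop_mean = sum(population) / len(population)
adm_mean = sum(admitted) / len(admitted)

# The truncated group's mean comes out well above the population mean.
print(round(pop_mean, 1), round(adm_mean, 1))
```

With these numbers the admitted group averages roughly a dozen points above the population; the exact gap depends on where the floor sits, but it is always positive, which is the whole point of the argument.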

I wonder if you've read the novel "Flowers for Algernon"? If not I have a feeling you'd like it (although perhaps find some of it difficult going - I certainly did).

I suggest reading the short story version, which came first.

I read the short story and found it to be an enjoyable but somewhat depressing read. Losing your intellectual abilities while being able to perceive that you’re losing them sounds awful :(

I read the synopsis and "noped" out pretty hard. I'm definitely no genius, but losing what little I have would be horrifying. All those years of work to build an understanding...gone.


Here is that exact documentary you were thinking of [0]. It's really a fantastic documentary on the incredible feats of von Neumann and worth a watch by everyone here.

That person you quoted was Edward Teller and the interview can be found at 54:58 [1].

I read somewhere that the young boy asking von Neumann a question at 6:13 [2] is actually Bill Clinton. Can anyone verify this?

[0] https://youtu.be/Y2jiQXI6nrE

[1] https://youtu.be/Y2jiQXI6nrE?t=3299

[2] https://youtu.be/Y2jiQXI6nrE?t=371

>I read somewhere that the young boy asking von Neumann a question at 6:13 [2] is actually Bill Clinton. Can anyone verify this?

The top comment poster on the other video says it's his father, Bill Walters.

Bill Clinton would have been nine years old when this was filmed. The boy in the clip looks a bit too young to have a nine-year-old kid.

Can't verify it 100% but the boy in the video looks similar to Bill's boyhood photos: https://images.app.goo.gl/WT6GygVPiTiKgkuY7

However, Clinton was born in 1946, and this video is from 1955, so he would have been 9. The boy in the video looks older than 9 to me.

No, it’s not Clinton.

Ha, that is indeed the exact documentary I was talking about, thanks!

This is terribly sad. It's also a good lesson in why not to tie your well-being to your identity (or anything else that's impermanent -- which, actually, includes everything). It's a lesson I'm still learning.

Anything you tie your well-being to can be taken from you. And will be in the end.

Yeah, you can't not tie your identity to your work to at least some degree, but you can invest in other things, like family and friends, that will soften the blow of losing it.

It’s a lesson in the inevitable consequences of loving something fiercely, be that your own intellect, your spouse, or a child. You may lose any of these things before you die, and the pain will probably be proportional to the love you felt. I don’t think that’s a lesson in why not to allow yourself to love that way at all.

There may be mental pathways that allow for love without attachment. I personally don’t see them yet; perhaps I need to learn more about Zen or Daoism or something. In my unenlightened state, the only way I see to avoid the pain of loss is avoiding love in the first place, which doesn’t look like a good idea.

I'm far from enlightened myself, but meditation certainly has helped me a great deal in this regard. It's not that I don't grieve loss, but it's no longer such a terrifying and identity-destroying affair.

I enjoyed your video [0], also yeesh. 90 seconds in or so, "For the moment we'll leave the General Dynamics exhibit and come over here to an exhibit we have here; this is the electronics exhibit, whereby, as you see, they have a number of, uh, gadgets, I guess you might call them that... Or is that the wrong word?"

That young man who wants to be a lawyer was actually a lot better prepared for this interview than the guy with the microphone!

In an answer to the question of why there is no evidence of intelligent life beyond earth despite the high probability of it existing, Szilárd responded: "They are already here among us – they just call themselves Hungarians." [1]

[1] https://en.wikipedia.org/wiki/The_Martians_(scientists)

For a depressing analysis on where all the Hungarian geniuses came from and why they vanished: https://slatestarcodex.com/2017/05/26/the-atomic-bomb-consid...

He mentions Curie as non-Ashkenazi, but even with her I'm not sure: she was born Polish with the full name "Maria Salomea Skłodowska"(-Curie), and her middle name "Salomea", given after her grandmother, sounds to me like it could be of Jewish origin. I wonder what the percentage of Nobel Prize winners etc. would look like if you really dug a few generations deeper or did genetic tests.

Marie's grandchildren are still alive and could presumably be genetically tested to see their heritage if there was any real question.


I have a hypothesis that almost all of the greatest intellectual achievements come from a very small number of bloodlines. I really doubt the idea that people are simply born and some just end up as the smartest in generations; I think there's more to the story.

> I have a hypothesis that almost all of the greatest intellectual achievements come from a very small number of bloodlines.

Nice hypothesis, but not exactly backed up by history. So many different societies have made so many diverse contributions to innovation, I'd be pretty amazed if you could draw a cohesive line through all (or most) of them.

Also, separately, it's pretty difficult to separate "bloodlines", by which I assume you mean genetically inherited traits, from socially inherited traits.

A great physicist is probably more likely than average to have offspring that are also great physicists. But is that because of their "blood" (i.e. DNA) or because the children grew up in a household exposed to physics at a much higher degree than average. The children's "blood" is an inherited DNA trait, but their upbringing is an inherited social trait.

The question boils down to the age-old nature vs. nurture argument. All signs seem to point to nurture being the far more powerful influence.

I think the underlying mechanism is simply selection: just as you can breed dogs, tulips or bacteria for traits, humans can be "bred" for intellectual performance. No religions/cultures do it, with the exception of the Ashkenazi.

> I think the underlying mechanism is simply natural selection

Genetic natural selection takes thousands of years, or at an absolute minimum multiple generations. Social selection occurs far more rapidly, often within a single generation. A person born in the 1950s that is genetically predisposed to manual labor may do well for the first few decades of their life, but as society changes and starts valuing white collar work more, they will do far worse. Their genetics didn't change, but society did.

Those that favor nature over nurture vastly under-estimate the time-scales which it takes natural selection to occur as compared to social selection.

I think you underestimate how powerful this mechanism can be. Take 20 years for one generation, i.e. 100 years is 5 generations. We're talking about roughly the time from 800 CE, that is, 60 generations. Imagine taking the smartest people to reproduce, then taking the smartest of their kids, and so on - 60 times. You will see an effect. You can optimize for anything, e.g. "time you can dive underwater", and you will see the difference compared to other people after 60 generations; see https://www.sciencemag.org/news/2018/04/indonesian-divers-ha...

Actually this isn't really true. If you go back over the last 200 years, intellectual contributions are extremely concentrated. My point was further along these lines:

But what I am saying is this: the highest levels of genius, I would argue, are categorically different from just highly intelligent people. I think there's something different about the way their brains are structured. I've interacted with some of the top minds in a few fields, and I never come away with the feeling that they are merely farther along some kind of "intelligence spectrum". It always feels as if their thinking is different, i.e. its source and methods are a different type of brain. I think there are a few mutations floating around in a few different pools of the population.

I am not sure that a 'bloodline' is a valid genetic concept. It depends a lot on who the partner is; after all, they provide 50% of the offspring's genetic material. Put another way, if you are a genius in some field, the chance of having a child with someone of equal or greater aptitude is effectively zero.

The ‘more to the story’ is possibly just affluence and family stability, which are also culturally embedded.

> Put another way, if you are a genius in some field, the chance of having a child with someone of equal or greater aptitude is effectively zero.

aka regression to the mean

> The ‘more to the story’ is possibly just affluence and family stability, which are also culturally embedded.

At a guess I would think these are the more important factors.

Being born into an affluent family gives access to higher likelihood of more varying influences and stimuli from an early age. Children are by their nature curious creatures, so having the possibility to satisfy their curiosity in more avenues should reflect on their later ability to absorb new information in these fields (because they already have an established baseline knowledge).

Family stability probably helps to support curiosity and emotional safety. When failures are treated as positive experiences ("what did we learn from this?"), as opposed to wasted effort, you are more likely to allow yourself to seek more such experiences.

Of course there are outliers. But over generations, I would expect more innovations and brilliant minds to emerge from families who can provide and support their offspring with the environment to flourish in their fields of interest.

But what I am saying is this: the highest levels of genius, I would argue, are categorically different from just highly intelligent people. I think there's something different about the way their brains are structured. I've interacted with some of the top minds in a few fields, and I never come away with the feeling that they are merely farther along some kind of "intelligence spectrum". It always feels as if their thinking is different, i.e. its source and methods are a different type of brain. I think there are a few mutations floating around in a few different pools of the population.

This is just unnecessarily many moving parts. All it takes to have extra geniuses is for grad students to marry each other and have a bunch of kids, instead of dating in the general pool. Also, the thinking of regular people is different in myriad ways, if you pay attention.

this looks about up your alley: https://en.wikipedia.org/wiki/Phrenology

Interesting. I never knew about the link between Ashkenazi genetics and intelligence.

It's not talked about a lot, but basically Ashkenazis are the genesis of a lot of modern intellectual ideas and companies; both FB and Google are Ashkenazi creations.

Also Bernie Madoff, Harvey Weinstein and Jeffrey Epstein; should we be so quick to talk about their origin?

I do think there's something to all of it; not interested in getting my account banned though...

Uber too, Adam Neumann is Ashkenazi Jewish.

You mean wework? Was there someone at Uber as well?

Yeah wework is the opposite of a great creation.

Oops, yes of course WeWork. Weird mistake, but in my defense I had a cold.

I don't know about the genetics part, but what I have noticed is that Jewish "leaf nodes" tend to be great mathematicians or otherwise intellectual. What I mean by this is a child/person of Jewish descent who is no longer religious and also does not have children. They are thus a leaf node in the tree. A current example would be Grigory Perelman.

I don't like to conjecture about cleverness and intelligence, and especially not genetics. But if there is one thing these Jewish leaf nodes have in common, it is a really good education.

I went to Budapest on a day trip (hooray for Easyjet) back in the early 2000s, and can attest to this. I definitely felt like I was a stranger in a strange land.

> In 1945, von Neumann proposed a description for a computer architecture now known as the von Neumann architecture,

Do note that he proposed a description but it wasn't his idea. https://en.wikipedia.org/wiki/First_Draft_of_a_Report_on_the...

> some on the EDVAC design team contended that the stored-program concept had evolved out of meetings at the University of Pennsylvania's Moore School of Electrical Engineering predating von Neumann's activity as a consultant there, and that much of the work represented in the First Draft was no more than a translation of the discussed concepts into the language of formal logic in which von Neumann was fluent.

He also got the US military to pay for the R&D and to put the results of all the research in the public domain. He got into a big fight with Einstein over whether or not they would do experiments at IAS (Einstein only wanted to do math and theory there).

"Differences over individual contributions and patents divided the group at the Moore School. In keeping with the spirit of academic enquiry, von Neumann was determined that advances be kept in the public domain. ECP progress reports were widely disseminated. As a consequence, the project had widespread influence. Copies of the IAS machine appeared nationally"

from: https://www.ias.edu/electronic-computer-project

He was also granted a patent, posthumously, for a "non-von Neumann" architecture that has never been built, to my knowledge. I'm surprised that all the attention on quantum computing hasn't revived the idea.


There's been a lot of interesting research in superconducting computer architectures that bear similarities to these concepts; some of it pursuant to quantum computing, some classical. [1] DACs that use switching of Josephson junctions to store persistent current, [2, 3] logic and storage elements based on the superconducting phase change (e.g. the ability of a metal to hold a voltage differential), and even [4] rudimentary AQFP-based FPGAs! I've seen talks describing more elaborate versions of [4] but can't find a good paper. A pretty good overview of this frontier is [5].

[1] https://arxiv.org/pdf/1401.5504.pdf

[2] https://arxiv.org/pdf/1403.6423.pdf

[3] https://pdfs.semanticscholar.org/0e4e/237bf8b39600de05732d0b...

[4] https://ieeexplore.ieee.org/document/7383477

[5] http://snf.ieeecsc.org/file/8506/download?token=StMHBjkP

Many of the examples proposed sound similar to other devices and things used in - for example - flash memory, DRAM, and other similar technologies.

Based on what I could see from the patent, it looks like an exploration of using such elements in place of (then) vacuum tubes; it also seems like it might use non-linear properties; somewhat of an "analog computer" in a way.

There also seem to be hints at these elements being used in an "artificial neuron" manner (hardware-based artificial neural networks and neurons were a topic of interest at the time).

Strangely (I may have missed it - I only skimmed the patent) the use of the transistor seems to be missing (again, not sure)...but if this is true, it may be because again - it seems to be exploring non-linear storage and response as memory and computational elements.

The idea of using - say - capacitive and inductive elements for memory (at a minimum) was known back then; it was also known how to use inductive-only elements for amplification and switching purposes (google "magnetic amplifiers" - the tech goes back a long way). But this patent seems to be using both in a different manner for a combination of computation and memory (again, similar to an artificial neuron).

It's a very interesting patent; in a similar scope as to Turing's writings on neural network systems. Thank you for bringing it to our attention.

Aha — Stigler's law strikes again.


That's a great article.

Whitehead's quote from that article ("Everything of importance has been said before by somebody who did not discover it") is elegantly stating that it's unoriginal turtles all the way down.

Interesting read. My explanation: there are a lot more B-class scientists than A-class. They are more likely to stumble on new ideas, but they might be unable to articulate them, or they may not have an audience that will listen. It takes an A-class scientist to bring those ideas forward.

I'm disappointed Stigler's law was actually coined by Stigler.

> Stigler himself named the sociologist Robert K. Merton as the discoverer of "Stigler's law" to show that it follows its own decree, though the phenomenon had previously been noted by others.

Yes! It’s a pretty big controversy so far as computing goes and a very interesting read. ENIAC (ISBN 978-0802713483) is a good account that includes it.

I have to wonder what else gets washed away in all the myth-making about these guys.

Wasn't this preceded by Konrad Zuse's Z1 and Z3?

Can anyone recommend readings on von Neumann that highlight his non-mathematical achievements? Obviously he was primarily a physicist and mathematician, but for a non-mathematician, the long list of academic publications is hard to interpret and appreciate. For example, more in the vein of these:

- Reportedly, von Neumann possessed an eidetic memory, and so was able to recall complete novels and pages of the phone directory on command. This enabled him to accumulate an almost encyclopedic knowledge of whatever he read, such as the history of the Peloponnesian Wars, the Trial of Joan of Arc and Byzantine history (Leonard, 2010). A Princeton professor of the latter topic once stated that by the time he was in his thirties, Johnny had greater expertise in Byzantine history than he did (Blair, 1957).

- ...conversing in Ancient Greek at age six...

- On his deathbed, he reportedly entertained his brother by reciting the first few lines of each page from Goethe’s Faust, word-for-word, by heart (Blair, 1957).

> Reportedly, von Neumann possessed an eidetic memory, and so was able to recall complete novels and pages of the phone directory on command. This enabled him to accumulate an almost encyclopedic knowledge of whatever he read

If he was able to recall ~entire pages of the phone directory on command, I wonder if he could also recall (near) verbatim text of all the novels he had ever read, to what degree he could do this, or to what degree he could at least comprehensively recall key points, facts, timelines.

I would think he would have spent some time speculating on how the brain stores memories, I wonder if any of his theories were ever captured in some form.

Supposedly, yes-- during his final stay in the hospital, his brother read to him from a book they'd enjoyed during their childhood, Dickens' A Tale of Two Cities. When his brother had to turn the page, John would continue the narration from memory while his brother found his place on the subsequent page.

Given that he could be occasionally absent minded, I suspect that it had to be something that piqued his interest, but his sense of what was interesting was extremely broad.

He did in fact speculate on the workings of the brain in The Computer and the Brain, which is based on a lecture series he had planned out but did not deliver. It was more in the context of automata theory, but as someone with an interest in AI, automata, and neuroscience, it was frankly rather dank[0]. A lot of the pioneering work was, and is, enjoyable in part because it's original and speculative, so you don't have to master the literature to make sense of it; you can just pick a paper and go. I'd recommend reading Pitts and McCulloch, plus also Lettvin, but others might have some equally lit[1] recommendations.


0. In the contemporary sense, cf. "cool", "dope", or "excellent"; not dank like a root cellar.

1. vide supra

Okay we get it, you blaze.

My word choice is more driven by exposure to the dataset that I'm working with. There's been some recent successes with autoencoders trained on virtual sensory input (i.e., video games) with surprising results, e.g., neural networks that can simulate the dynamics of these environments with surprising fidelity.

Of course, learning to play video games at a high level is trivially easy, as everyone in the field now knows. The next challenge is, naturally, to make money doing this. But how? After the traditional thirty seconds of research before undertaking a major project, I determined that the only way to make money from video games is to become a popular streamer.

So now I am training an agent to generate video of it playing and reacting to an imaginary game and equally fictitious Twitch viewers, with a dataset drawn from the top Fortnite streamers. The reward function is comprised of a blend of subscribers, donations, and (logarithmically scaled) misogyny in the chat. Thus far, I've only managed to create some sort of window into hell, where the "game" consists of unceasing violence, murder after murder after murder as towers of mismatched material swell and fall in ever transforming locations on the isle while the chat endlessly subscribes, spams, and emotes in cackling glee and the superimposed webcam video features a... thing with too many eyes and hands screaming incoherently.

At first I thought it was a problem with my dataset, so I started watching some of the streams myself. This has not yielded insight into the whole "nightmare vision" output of my model, but it has expanded my vocabulary on the twin subjects of combustibles and comestibles, which I feel is a reasonable trade-off for the sanity battering associated with this whole endeavour.

What blend of lovecraft and twitch is this.

Okay we get it, you blaze.

Turing's Cathedral by George Dyson is fantastic https://books.google.com/books/about/Turing_s_Cathedral.html...

>He died at age 53 on February 8, 1957, at the Walter Reed Army Medical Center in Washington, D.C., under military security lest he reveal military secrets while heavily medicated.


Makes you wonder what secrets he took to the grave. Thus the military security right up to the very end.

Design details for nuclear weapons would be the obvious possibility.

He was a member of the US government's JASON group.

John von Neumann and the Origins of Modern Computing (Aspray) is a great historical account and details many of his other contributions (e.g. game theory, automata, but especially meteorology).

This one is pretty good


(mentioned briefly in the article as "Heims's book") and maybe this one but I haven't read it.


I own a biography of von Neumann called "John von Neumann", by Norman Macrae. It is serviceable and gives you a feeling for von Neumann's life, but it is not particularly deep. It does contain examples of the sort of anecdotes you mentioned, however.

Macrae (1992) is widely mentioned in the linked article.

So he had what's called photographic memory?

Not really. From the wikipedia page on eidetic memory [1]:

"Although the terms eidetic memory and photographic memory are popularly used interchangeably, they are also distinguished, with eidetic memory referring to the ability to view memories like photographs for a few minutes, and photographic memory referring to the ability to recall pages of text or numbers, or similar, in great detail. When the concepts are distinguished, eidetic memory is reported to occur in a small number of children and as something generally not found in adults, while true photographic memory has never been demonstrated to exist."

[1] https://en.wikipedia.org/wiki/Eidetic_memory

I've always wondered about that, because a few savants like Kim Peek are/were able to recall everything they read.

"He could speed through a book in about an hour and remember almost everything he had read, memorizing vast amounts of information in subjects ranging from history and literature, geography and numbers to sports, music and dates. Peek read by scanning the left page with his left eye, then the right page with his right eye. According to an article in The Times newspaper, he could accurately recall the contents of at least 12,000 books."

I think the test case (which nobody has passed) for true photographic memory is: you are given an image of a bunch of random dots with no apparent structure, then you are given a second such image, and you have to mentally combine them and say what image they form. Because if you superimpose the two images of random dots — say using transparent slides, one on top of the other —- they actually form a photograph of Marilyn Monroe or something, but each image in isolation looks totally random.

If someone had photographic memory then they could do this superimposition in their memory, but it doesn’t seem like anybody can.
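For the curious, here is a rough sketch of how such a test image pair could be constructed (my own construction, using pixelwise XOR as the combining rule; real transparency-stacking schemes such as Naor-Shamir visual cryptography are more involved). Each share alone is uniform random noise; only their combination reveals the hidden picture:

```python
import random

def split_image(target, seed=0):
    """Split a binary image (list of 0/1 rows) into two random-looking
    'shares' whose pixelwise XOR reconstructs the original."""
    rng = random.Random(seed)
    # First share: pure random noise, independent of the target.
    share1 = [[rng.randint(0, 1) for _ in row] for row in target]
    # Second share: noise XOR target, so it also looks random on its own.
    share2 = [[r ^ t for r, t in zip(r1, trow)]
              for r1, trow in zip(share1, target)]
    return share1, share2

def superimpose(a, b):
    """Combine two shares pixel by pixel."""
    return [[x ^ y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

target = [[0, 1, 1, 0],
          [1, 0, 0, 1]]
s1, s2 = split_image(target)
assert superimpose(s1, s2) == target  # combining the shares recovers the image
```

The memory test would then be: memorize `s1`, look at `s2`, and report the hidden pattern; without retaining the first share pixel-for-pixel, the second share carries no information at all.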

I don't know if I agree that it's a good test. If I had two images in front of me of random dots, I cannot superimpose them and see a picture of Marilyn Monroe, no matter how long I can look at them. (ah, actually I could go cross-eyed to make them overlap in my field of vision, but I can't do it mentally).

I could imagine a "spot the difference" type of test would be a good one though. The fields of random dots are identical save for one dot, and you have say where it is. Something like that.

Right, the point of the test is that you can't do it unless you can (relatively quickly) memorize what looks like a random picture of noise. With true photographic memory, that should be possible.

I still don't agree - I'm saying that even if I don't have to remember the field of random dots, even if it's right in front of me, I still can't see an image of Marilyn Monroe. Being able to remember it wouldn't help. If being able to remember it wouldn't help me pass the test, then the test will have a high false-negative rate, and is not a good test of memory.

I am so happy to see this article about Von Neumann on HN!!! I have been posting about him on here for years, and have read all his books, but not as many of his papers as I would like, they are really hard! I have been working for many years now on continuing his theories of weather control technology and self-replicating machines. He is my absolute personal hero and the scientist who, far above all others, I consider to be the one in whose footsteps I want to follow.

That's interesting! Have you made progress on self-replication? I think it's a very important problem. I've made some notes categorized under topics/self-replication.html in http://canonical.org/~kragen/dercuano-20191110.tar.gz, but of course I'm no Johnny von Neumann.

Von Neumann is on the record as saying he believed computers were a subfield of self-replicating machines. I believe they will be very important in the future and will have many applications.

Not really, unfortunately. I think it would be fun to build a 3D printer that can replicate itself, but I have not tried building any prototypes because, besides being too expensive, I also haven't taken my research far enough. I am a very ambitious person and have big plans for the future! What I have done on this subject, besides reading about it, is very theoretical and abstract; it's more of an aspiration right now than an active research project, to be completely honest.

You may want to check out https://reprap.org/wiki/RepRap which is exactly this type of 3D printer. I don’t think it’s 100% though.

Well, RepRap has the philosophy that self-replicating systems such as hedgehogs and raspberries need some definite set of “vitamins” available prefabricated in their environment for self-replication, and RepRap chose things like threaded rod, NEMA motors, hotends, and Arduinos as their vitamins. This was extremely successful at making 3-D printers mainstream — every popular 3-D printer out there derives from RepRap designs — but not at producing an exponentially growing quantity of 3-D printers printed by 3-D printers.

I'm interested to hear what you think; my notes talk a lot about how to make things less expensive.

Von Neumann invented a novel paradigm for computing using harmonic integration of analog oscillations. The patent was granted after his death.

While there were a few prototypes in the 50s, the transistor killed it. A fully functional version of this computational architecture has never been built, to my knowledge:


For anyone who's interested in more detail:

This type of computer is also called a "parametron" (https://en.wikipedia.org/wiki/Parametron). A Japanese researcher named Eiichi Goto independently invented it around the same time as von Neumann, and developed it much further than von Neumann did, so the idea is more often associated with Goto than with von Neumann.

The basic idea is that if a nonlinear harmonic oscillator is driven at twice its resonant frequency, it will oscillate stably in either of two phases. The two phases represent "0" and "1". If the driving signal is switched off and on again, the oscillator will arbitrarily "pick" a phase to stabilize on. If it's exposed to an input signal from another oscillator as it's turning on, it will always pick the same phase as the input signal. This makes it possible to copy a "0" or "1" from one oscillator to another. If the oscillator is exposed to input signals from several other oscillators, it will pick by "majority vote". Finally, a NOT-gate can be built by inverting the signal polarity. This set of primitives is sufficient to build arbitrary logic gates and flip-flops.
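As a toy illustration of why those primitives suffice (my own sketch, not simulating the oscillator physics, just the logic): treat each oscillator's phase as a bit, and note that feeding a constant reference phase into a majority vote yields AND and OR, which together with polarity inversion are logically complete:

```python
# Toy model of parametron logic: each "oscillator" holds a phase bit (0 or 1).
# Majority vote and polarity inversion are the native operations.

def majority(*phases):
    """An oscillator locking onto the majority phase of its input signals."""
    return int(sum(phases) > len(phases) / 2)

def NOT(p):
    """Invert signal polarity (swap the two stable phases)."""
    return 1 - p

# A constant-0 or constant-1 reference oscillator turns majority vote
# into AND and OR respectively.
def AND(a, b):
    return majority(a, b, 0)

def OR(a, b):
    return majority(a, b, 1)

def XOR(a, b):
    # Built from the complete set above.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))
```

With these, arbitrary combinational logic follows; flip-flops come from the clocked on/off driving cycle described above.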

Goto's paper has an excellent explanation and more detail:

Goto, E. (1959). The Parametron, a Digital Computing Element Which Utilizes Parametric Oscillation. Proceedings of the IRE, 47(8), 1304–1316. doi:10.1109/jrproc.1959.287195

(PDF available at https://sci-hub.tw/10.1109/JRPROC.1959.287195)

Thank you for this!

There’s also a quantum variant:


What advantages would this have?

Consciousness, primarily.

Ha! No, but the computer would resemble brain oscillations and harmonic integration much more than linear clock cycles. We still don't know much about how the brain uses rhythms and harmonics to compute, but it does (brainwave bands are octaves, i.e. harmonic doublings: 2.5, 5, 10, 20, 40 Hz).

Any further resources on this phenomenon? Seems very interesting as an area of theoretical exploration.

Do you have any more info on this?

Pretty good article. Can't wait to read other, less sensational articles recommended here.

One particular error in the article stood out to me: the Trinity test site is in White Sands, New Mexico, not Nevada. This was immediately noticeable because I've been to the Trinity site.

Von Neumann probes pop up in sci-fi. One of my favorite uses of them is in the Bobiverse books: https://www.goodreads.com/book/show/32109569-we-are-legion?f...

von Neumann, Oppenheimer, Bohr, Einstein, Rutherford, Turing, Teller, Szilard, Wigner, Meitner... the list goes on... -- how did that time produce so many people of colossal intellect? War certainly can't be the primary factor, given that many of them were brilliant/productive even before WWI

There is no reason to think we are no longer producing people with colossal intellect. They are probably now collaborators on a large project, since that is what math and science have become.

Plus we haven’t had enough time to make myths about the recent times. Of course the 20th century was incredible for physics, but we don’t know yet what will make the recent years special. We may, for example, in 80 years wonder about how it was that the early 21st century produced so many ambitious, large-scale experiments.

It's also possible that the greatest minds of our time aren't engaged in furthering our understanding of the universe, but instead devising more efficient ways of displaying internet advertising or faster stock trading algorithms.

I've always felt that this is USDA Prime bullshit.

Einstein didn't waste away his years at the patent office.

Also you could say that the greatest minds of the 20th century all engaged in coming up with bigger and bigger explosions for the military.

And so on.

That’s a great point.

The silver lining is that many of these companies publicly release their research. E.g., Rob Pike and Ken Thompson may be working on advertising, but we can still benefit from Golang.

I have no idea about the state of modern mathematics but I wouldn't have guessed that the idea of a lone mathematician has passed. What are the big projects in mathematics?

I'm not sure if it's necessarily big projects, but proving difficult theorems today often involves the construction of huge "machinery", whole branches of math, that then get applied to simple-to-state-but-difficult-to-prove theorems, the example being Wiles using modular form theory to prove Fermat's Last Theorem.

And this situation comes because all (or the great majority) of the easy theorems have been proved for most established branches of math.

This also means great discoveries are coming at a later age for mathematicians, as simply getting up to speed in complex fields takes years.

All of this implies it would be hard to have another Von Neumann today.

It would take people like us (anyone who doesn't already know what the big projects in math are) years of study to resolve that list into anything more than names. However, if you're looking for a modern math celebrity I'd volunteer Terence Tao. Although only a mathematician could understand what he's working on, it's clear from how he's talked about that in eighty years people will be saying, "I wonder if there will be any people like Tao in my lifetime."


Roger Penrose is still around

You’re right, math is a little bit less collaborative than other endeavors still. Lone mathematicians still make huge contributions, e.g. Perelman.

But for example, one could call the endeavor of classifying all the finite simple groups a big collaborative project.

4 on your list are Hungarian, and 3 of them went to the same high school:


Hungary had a great education system at the time.

Sadly, not anymore. The government is doing everything to dumb down education and research in academia. It's really pathetic. We are doomed for generations thanks to this.

Just a recent example: a Prezi.com founder decided to create an alternative private school to show and lead by example. They didn't get the accreditation this year. If you stick out, they shut you down.

It hasn't had that education system since World War II.

Both quality education and an appreciation for science and intellectual achievements in general, could easily be a big factor. These days, scientists, journalists and other truth-seeking professions are often criticised and discredited because the facts they find are politically inconvenient (global warming, anyone?). Education is often seen primarily as an expense, rather than an investment. People admire pop stars more than scientists. Truth is apparently whatever you strongly believe it to be, these days.

It doesn't surprise me at all that the current political climate is not great for fostering great minds.

I read that post, "great" is an understatement. Thanks for sharing!

What made the school great? Was it brilliant teachers, or some replicable system?

I have a theory that maths being taught as a skill, instead of as an intuitive system derivable from axioms as with Euclid, may be a cause.

László Rátz taught math to both Neumann and Wigner:


> Hungary had a great education system at the time.

I seriously doubt you could back this. You are generalizing from a single school. Might as well argue that socialist Hungary had a great education system because of Fazekas. Neither are true. I happen to have a maths teacher degree from a Hungarian university and we studied Hungarian education history and I learned much more about education systems later on my own (and this is not to say this university maths teacher course was a good one, quite the opposite). If you want to know what great education at the time looked like, read up on Summerhill -- it was founded in 1921 but humanistic education has been around for centuries.

While having a much smaller population (10M), Hungary places 4th in worldwide medal rankings on the International Mathematics Olympiad [1] behind China (1.5B), USA (300M) and Russia (150M)

[1]: https://www.imo-official.org/results_country.aspx?column=awa...

There were a few, very few special math classes that went against the system which delivered results. I went to one, I should know...

If we're talking about von Neumann, or even just the math olympiad, then clearly we're not talking about how well the education system serves the 50th percentile. It's possible for a system to be awful for most, and somehow find and train the top few percent brilliantly.

Well, that's historical. In 2019 it was 1. China 1. USA 3. S. Korea (51M) 4. N. Korea (25M). And Hungary lost to Serbia (7M).


Mathematics culture and mathematical pedagogy in Hungary have a legendary reputation. I don't know how their system works now, but at least up to the 80s or 90s it was seen as being at the highest level. It emphasized creativity, communication and problem solving.


> but at least up to 80s or 90s it was seen as being the highest level. It emphasized creativity, communication and problem solving.

What utter baloney! There were a few, very few special math classes that went against the system which delivered results. I went to one, I should know...

Pre-WW2 Poland also had great math achievements. https://en.wikipedia.org/wiki/Lw%C3%B3w_School_of_Mathematic...

The city Lwów, or Lviv as Ukrainians now call it, had a great school and the city changed hands during war.

Also, Polish mathematicians from other universities played an important part in breaking the Enigma. They developed the bomba, the cryptographic machine later sent to the UK, which was then refined (as the British Bombe) and used to break it.


More generally, I suspect it was something about the era and its culture that valued intellect and sciences. These days people like that are often put down as nerds. Leaders and extraverts are praised and set as examples. Celebrities are also a modern invention, I see them as something quite distinct from "stars". The only requirement to be a celebrity is to be popular.

Also, these days people would rather worship CEOs.

Or great genes

Or there's some kind of magical properties of paprika that the rest of the world hasn't yet noticed ...

SlateStarCodex "The Atomic Bomb Considered As Hungarian High School Science Fair Project" https://slatestarcodex.com/2017/05/26/the-atomic-bomb-consid...

I have had the same question in my mind for a long time. My preliminary answer is TV and entertainment; in those days there was much, much less distraction.

Oh... :(

Elon Musk will be this generation's Edison. Some current nobody in ML or AI research will be the Turing of our time. They're here, we just don't know it yet.

>Elon Musk will be this generation's Edison.

I hope you mean this in the literal sense, in that they are both people who have taken the scientific contributions of others for their own businesses, and somehow get the credit for work they never did.

Yes, I would have said Tesla if he was the Tesla of our generation. I'm not an Edison fan, but I appreciate the work he did for society.

Don't forget Paul Dirac, a true intellectual contemporary of von Neumann.

> how did that time produce so many people of colossal intellect?

We have since invented TV, computer games, quantitative finance and web frameworks. All these things cause massive brain drain.

Claude Shannon I'd include with Alan Turing & von Neumann. All 3 are at the top of the list of those who fathered the ideas behind modern technology, IMO.

How could you forget von Braun?

That war criminal? The sooner we forget about him the better.

We must never forget him so we keep in mind that genius can be used in support of evil.

You might like "Prisoner's Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb" (https://www.amazon.com/Prisoners-Dilemma-Neumann-Theory-Puzz...)

Awww, I was hoping they'd mention how von Neumann was keen on launching a preemptive nuclear strike against the USSR.

"If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?"

Hardly surprising that he was one of the influences on the character of Dr. Strangelove. Was he a remarkable genius? Absolutely. Was he right about everything? Definitely not.

Context is crucial when reading this quote now rather than the 1950s. JVN said this before a nuclear war would have been 'world ending'.

Consider that several nations had avoided war as hard as possible prior to WW2, standing by as entire countries were absorbed by a hostile power. In the end it only resulted in much greater destruction. Many people like JVN saw the same theme playing out with the Iron Curtain, with the destructiveness of the weapons only increasing over time.

The thought process was:

"We need to have a destructive war now to avoid having an earth shattering war later."

There had just been two world spanning wars in their lifetime. They considered a third inevitable. If it was going to happen at some point, better that it occurs before world ending arsenals were constructed.

I am aware of the context - doesn't make it any less terrifying.

Edit: I'm going to ask you the same question I asked another commenter - do you think it would have been better for the US to have attacked the Soviets as Von Neumann and others wanted?

Of course not, but only because we already know how the story ends. If they had been correct, we wouldn't be here to say that we should have listened.

Their predictions on nuclear arsenals were correct. A nuclear war in 1965 would have been vastly worse than one in 1950.

What they got wrong is that great powers would successfully avoid war over the long term. Which at the time was a bet with very poor odds.

I seem to remember that von Neumann was also responsible for what later came to be known as "Mutually Assured Destruction": the game-theoretic idea that both nations would be deterred from acting preemptively because the results would carry too great a cost.
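The game-theoretic point can be put in a few lines (a toy sketch; the payoff numbers here are entirely made up for illustration): once second-strike retaliation is assured, a first strike leads to the same catastrophic outcome as being struck, so striking first is never better than refraining.

```python
# Toy 2x2 deterrence game with made-up payoffs (illustrative only).
# Each side chooses "strike" or "refrain"; assured retaliation means
# any first strike ends in mutual destruction.

payoffs = {  # (row_choice, col_choice) -> (row_payoff, col_payoff)
    ("refrain", "refrain"): (0, 0),
    ("strike",  "refrain"): (-100, -100),  # retaliation is assured
    ("refrain", "strike"):  (-100, -100),
    ("strike",  "strike"):  (-100, -100),
}

def best_response(opponent_choice):
    """Row player's payoff-maximizing choice against a fixed opponent move."""
    return max(["strike", "refrain"],
               key=lambda c: payoffs[(c, opponent_choice)][0])

# Against a refraining opponent, refraining (0) beats striking (-100),
# so mutual restraint is a stable outcome.
assert best_response("refrain") == "refrain"
```

Without the assured-retaliation row (i.e. if a first strike were survivable and profitable), the calculation flips, which is exactly the preventive-war logic discussed above.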

While you were aware of the context, many others reading your quote (like me) were not. I definitely appreciated having more of the full story.

...one of the influences on the character of Dr Strangelove...

Herman Kahn being another.


Paul Boyer's assessment of the pastiche here omits von Neumann, though the environment was target-rich:

While exposing the dangers and dilemmas of deterrence theory, Kubrick also satirized contemporary military figures and strategists, probably including Henry Kissinger, the author of Nuclear Weapons and Foreign Policy (1957); physicist Edward Teller, the “father” of the H-bomb; the ex-Nazi space scientist Wernher Von Braun; and the bombastic, cigar-chomping SAC commander Curtis LeMay, who in 1957 had told a government commission assessing U.S. nuclear policy that, if a Soviet attack ever seemed likely, he planned to “knock the shit out of them before they got off the ground.” Reminded that U.S. policy rejected preemptive war, LeMay had retorted, “No, it’s not national policy, but it’s my policy.” Much of the strategic thinking that Kubrick critiques, and even some of the dialogue in “Dr. Strangelove,” came from the work of Herman Kahn of the RAND Corp., an Air Force-funded California think tank. Kubrick read Kahn’s work carefully, especially his influential On Thermonuclear War (1960). General Turgidson’s upbeat assessment of the outcome of an all-out nuclear exchange directly paraphrases Kahn’s analysis.


(Apparently he does mention von Neumann elsewhere.)

This quote perfectly illustrates the unparalleled absurdity of contemplating nuclear warfare.

He was hardly alone in favouring a "preventative war" - Curtis LeMay was another advocate - they thought it was the rational thing to do.

How someone can rationally advocate the cold blooded killing of millions is, frankly, completely beyond me.

That's because we're lucky to live in a world where the cold war stayed cold, and LeMay and von Neumann were coming from half a century of the worst slaughter the world had ever seen.

We can be thankful that the likes of Eisenhower and later Kennedy, who lived through the very same times, recoiled from the prospect of initiating a greater slaughter than WW2.

You can't attribute staying out of nuclear conflict to any head of state, morality, or even logic (there was plenty of luck involved).

For example, after Kennedy resigned himself to striking the sites in Cuba during the missile crisis, it was someone else who told him he was wrong and to reconsider. (See the documentary The Fog of War.)

To put a fine point on it: even after all the brinksmanship, analysis, and diplomacy to resolve the Cuban missile crisis, it came down to a lone Soviet officer objecting to the launch of a nuclear-armed torpedo in response to signaling depth charges (interpreted as an attack) from US ships. Pulling the trigger would have started a global thermonuclear war, but it wasn't pulled: one officer decided against it, despite a 2/3 vote in favor.

"Later that same day, what the White House later called "Black Saturday," the US Navy dropped a series of "signaling depth charges" (practice depth charges the size of hand grenades[114]) on a Soviet submarine (B-59) at the blockade line, unaware that it was armed with a nuclear-tipped torpedo with orders that allowed it to be used if the submarine was damaged by depth charges or surface fire.[115] As the submarine was too deep to monitor any radio traffic,[116][117] the captain of the B-59, Valentin Grigorievitch Savitsky, decided that a war might already have started and wanted to launch a nuclear torpedo.[118] The decision to launch these required agreement from all three officers on board, but one of them, Vasily Arkhipov, objected and so the nuclear launch was narrowly averted."


There are plenty of examples where, basically, a roll of the dice saved civilization during the cold war. The destruction of society during the cold war was prevented by luck as much as anything else.

> There are plenty of examples where, basically, a roll of the dice saved civilization during the cold war. The destruction of society during the cold war was prevented by luck as much as anything else.

The systems are still in place, and even today the danger of an "accidental" start to the complete destruction of the current civilization is completely real. I always recommend a book subtitled "Confessions of a Nuclear War Planner." I won't write the details so that this doesn't appear as an advertisement. It's worth reading the whole book to get an exact idea of how fundamentally flawed the logic of those managing these systems is. Absurdly, they still think they will "win."

As the poet said "We will all go together when we go."

There is a documentary on Prime (Command and Control) that covers three incidents in the US where nuclear bombs almost went off.

It may have been the rational thing to do. That nothing worse in fact resulted from forgoing an early first strike and accepting the MAD build-up doesn't mean that was the likely outcome. They may have been right, and we just got lucky, so now it looks like they were wrong.

To build on what you are saying, let it be known that LeMay's nickname was "The Demon" and he participated in numerous war crimes involving terror bombing and civilian destruction in Japan and Korea.


Maybe you should study game theory

Their reasoning was based on the idea that a war with the Soviets was inevitable and that the US should attack before it lost the strategic advantage. Given that there was no war, are you actually arguing that they were correct?

What you are doing is "resulting": judging a decision made under uncertainty by its outcome. I am not sure what the probability of a war was, but the fact that war didn't happen doesn't mean it was improbable.
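A toy sketch of the "resulting" fallacy, with all numbers invented purely for illustration: the ex-ante better decision can still lose on any single roll, which is all an after-the-fact observer ever sees.

```python
# Illustration of "resulting": judging a decision made under uncertainty
# by its realized outcome instead of by the information available when
# it was made. All numbers are made up for the example.

def expected_value(p_good: float, v_good: float, v_bad: float) -> float:
    """EV of a gamble paying v_good with probability p_good, else v_bad."""
    return p_good * v_good + (1.0 - p_good) * v_bad

# Decision A: safe -- 90% chance of a modest gain.
ev_a = expected_value(0.9, 10.0, -10.0)    # ~ 8.0
# Decision B: reckless -- 10% chance of a big win, 90% chance of disaster.
ev_b = expected_value(0.1, 100.0, -100.0)  # ~ -80.0

# A is the better decision ex ante, even though on any single run B may
# happen to pay off and A may happen to lose.
assert ev_a > ev_b
```

The point: whether war happened is one sample, and one sample tells you almost nothing about whether the decision-makers reasoned well about its probability.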

So you think an attack would have been the right thing to do?

Edit: To be fair, in the event of an Able Archer '83 war we might all have wished that such an attack had happened (not me though, I'd be dead). So it's not impossible to construct a timeline where it was the right thing to do. I'm just curious whether, if you had been a decision maker back then, you would actually have chosen to do that.

Basically China is an ideological extension of the cold war. It's an oligarchical society where the people don't have a vote for their future. Versus us, where power is balanced based on the vote. The two are incompatible and they will come to a head. If we had pre-empted this the way von Neumann, MacArthur, etc. wanted... we would avoid the conflict that is coming in 10-20 years.

This is the solution to most of the world's problems... If not all.

Even Bertrand Russell briefly advocated for preventive nuclear war with the USSR.

It's easy to construct rational arguments for killing millions in cold blood. Today you could easily construct a rational utilitarian argument for killing billions now to prevent even greater death and suffering later due to climate change.

In fact, it's blanket rejection of the cold-blooded killing of millions that requires (at least for most naturalists/atheists) decidedly non-rational thinking --- i.e., adhering to relevant moral principles while disbelieving in moral realism.

Von Neumann was very smart, but I think he had poor judgement when it comes to politics. Perhaps not. I don't know tbh! When I learned about his political views many years ago it actually deeply changed strongly held opinions that I had about America. He was so smart, and I want so badly to follow in his footsteps, that I said to myself "If Von Neumann believed that only America can save the world from fascism, it is enough for me, I will go on as he began, he must have had his reasons and knew what he was doing. But I will not abandon my principles! Only change the way I go about trying to achieve them."

You recognize von Neumann's superior intellect, and yet, when it comes to his politics you find his views repulsive (your comment below) and believe that he was the one with poor judgment. Hmm! Perhaps you’re passing judgement selectively and approve only of what caters to your emotions. There's more to the world than what meets the eye, or the heart.

Unlike the commenter below, I believe political reasoning is amenable only to a small degree to first principles, for complex, non-linear systems exhibit emergent behaviors not present when investigating the parts. Instead of analytical reasoning, one ought to practice more holistic thinking.

Here's an intro video on complexity, in case you're ever interested: https://www.youtube.com/watch?v=i-ladOjo1QA

One thing that shouldn't be forgotten is that he judged the world as it looked at the time he was living in it. He may have made a totally different decision if he were young and in the same stage of life today.

Perhaps base your reasoning on first principles instead of the von Neumann who is in your head.

Well said. I do, fwiw. I'm not the kind of person who makes arguments by reference to authority often; I found his political views repulsive and confusing when I first learned about them. It's been a long journey, but I believe there is a strong possibility he simply was wrong about many of the beliefs he held about politics and the course of the future of human history.

I have always wondered how close von Neumann was to the maximum human potential. Does he sit close to the outer edge of possible human potential, or can greater geniuses be created? Is there anyone alive who even comes close to Johnny?

My guess is that each relatively difficult field (fields that require substantial thought... I guess you know it when you see it) has its freaks of nature: genetic anomalies that, for at least a brief moment, seem to transcend the other participants.

In the 20th century, philosophy had Saul Kripke. Among other things, he published one of the seminal papers in the (at that time) nascent area of modal logic when he was 17. I have it on good authority from a professor friend that many - who know Kripke in a professional setting - regard him as having a sort of alien intelligence.

Not to take away from Von Neumann, but there are at least a couple people each century who have these sorts of alien cognitive skills.

Also, while there is probably some truth to them having a sort of alien intelligence, we can't disregard the contextual factors of their success. Von Neumann lived in a unique academic-historical period. A period where one could make lasting, foundational contributions to a variety of disciplines.

At the most charitable, Kripke was not generous in acknowledging influences.


This is interesting. I hadn't heard of this dispute.

I just read the Marcus paper from 1961 that is essential to Smith's thesis. I also read the back-and-forth rebuttals between Smith and Soames. I read the Marcus paper before reading the Smith and Soames papers.

I don't really see how Soames' primary claim is not likely true. He says:

"Marcus, along with certain other philosophers, do deserve credit for anticipating important aspects of contemporary theories of reference. However this credit in no way diminishes the seminal role of Saul Kripke."

When reading the Marcus paper, you really have to start stretching and expanding her arguments if you want to claim that she did more than anticipate 'important aspects of contemporary theories of reference'.

It should also be noted that Timothy Williamson (Oxford) has been one of the staunchest advocates for the proper appreciation of the work that Marcus produced, and yet he doesn't agree with Smith.

But really, this is all probably secondary to the issues surrounding Kripke's importance. Naming and Necessity - like most paradigm-shifting works - was not a one-trick pony. Kripke expanded on his possible world semantics, introduced distinctions like metaphysical vs epistemic necessity, laid waste to any residual belief in the merits of logical positivism, came up with the first successful (at least, most see it as successful) argument for the existence of synthetic a priori truths, etc. Moreover, Kripke came up with at least two fairly watertight arguments against the descriptivist theory he was going against. If Marcus was the first person to introduce this new theory of reference, then the theory was stillborn. Kripke (if we take him as having taken the theory from Marcus) actually explained the ins and outs of the theory, provided associated puzzles, addressed counterarguments, related it to other issues in analytical philosophy, etc.

Lastly, Naming and Necessity was not the only impressive work of Kripke's. We would have to include his work on modal logic as well as his work on Wittgenstein. There are probably a number of puzzles and counterarguments that were never published that should be included as well. For example, Kripke once attended a conference on personal identity where a philosopher had just presented a new argument in his talk that elicited a standing ovation from the rest of the philosophers in the room (this basically never happens at conferences). Kripke was asked to come up and comment on this new argument. He came up and provided a watertight refutation of it. Everyone in the room was taken aback by this.

> He came up and provided a watertight refutation of it. Everyone in the room was taken aback by this.

His name? Albert Einstein.

Just kidding. But do you have any text of this exchange?

I don't. My philosophy of language prof in undergrad relayed it to me. She said the conference had been held in Israel. I'd start there in your search. My guess, though, is that a transcript doesn't exist. Analytical philosophy as a profession has typically been pretty piss poor for archiving conferences (whether transcripts or programs, etc.).

I read up on a little bit of what you reference and have a question...

Does Kripke describe a query language in Naming and Necessity?

I know someone who went to high school with Kripke, who told me that at that age he apparently was doing classified work for the military.

Ed Witten at Princeton, who reduced the number of dimensions in string theory, is called "The Martian" by his affectionate (but disbelieving) grad students ...

Motivation is key here. It seems that von Neumann became too fond of prestige and spy stuff. So he was less creative. Smarter than Gödel, able to recognise the importance of Gödel's work, but unable to produce anything of similar moment.

One could call it a feeling of responsibility maybe. The ongoing conflicts were defining the world and scientists (and he individually) could change the course of history. I think vNM's work is equally impressive (or even more; it's difficult to compare), maybe his main contribution is starting Game Theory[1]; but because it is spread across so many fields it doesn't look as groundbreaking.

[1]: Indeed he was so confident in his work he believed he was "Starting and ending" game theory, that there would be little else to work on after his monumental work that is OTGEB. He was very wrong of course, but like other theories of the kind (Information Theory comes to mind), this early lead is very significant to the creation of a coherent field.

'Feeling of responsibility' is a nice way of putting it but the fact is that creative geniuses are pretty irresponsible and unconscientious fellows. They abandon a lot of projects and can't force themselves to work on things that don't interest them.

We don't have any record of systems with more intelligence than humans, so the question to us is equivalent to asking what is the maximum potential of intelligence itself.

For this reason general AI may offer us great insight into our mortal coil.

Imagine a von Neumann who didn't drink and kept himself in shape.

I’ve always been curious about his drinking. Maybe the impression I have of him is misleading and he only drank at weekends or something. Not to compare my pathetic studies to him, but personally I’ve had to almost give up alcohol entirely to study math. I suppose I could also be getting old :)

He died of cancer, probably a result of nuclear testing. They didn’t fully realize how dangerous it was until later.

That's not a valid inference. Even during the era of nuclear testing, the great majority of cancers were not caused by radiation.


Smokers' lungs experience something like 1000x more radiation than a visit near Chernobyl. My dad was a student of von Neumann. In the '50s it was fashionable to smoke, drink martinis, and drink lots of coffee, and exercise was for chumps. My dad didn't live even as long as von Neumann, and died of a heart attack.

VN didn’t smoke and your first sentence is obviously exaggerated.

Folks nearest to nuclear testing died early of cancer. VN didn’t smoke. Look up the movie the Conqueror.

I grew up near SSFL so have some stories too.

That's nice. It remains the case that fallout will have caused only a small increase in the national background cancer rate.


Yes, only a “small” significant increase to those nearby. And we’re not talking only about fallout but manipulation in the lab.

It’s specifically your brand of risk downplaying that led to all these early deaths. Things are much safer today because educated folks understand the risks and prepare for them.

What makes you think von Neumann got any exposure in labs? He was a mathematician, not any sort of experimentalist.

The idea that his cancer came from fallout is just moralizing nonsense, the silly logical error of thinking that the universe operates in a way that punishes badness.

He was actually at Los Alamos, he was trained as a chemical engineer; it’s hard to imagine he didn’t get invited to see the demon core and other fun stuff, or otherwise breathe in some particles.

So, you think being in the room with the "demon core" was dangerous even when it wasn't being used? The radiation from it in a non-critical state would have been very small. If he had been there during a criticality accident it would have been recorded.

I think you're just waving your hands here because you have no evidence to support the claim.

Um, yeah, I stated all the evidence at hand. If you’re looking for a strong convincing case, I ain’t giving one.

And the likely cause wouldn’t be some gamma rays, it’d be from ingesting radioactive or chemically harmful matter.

So, your theory is he licked that plutonium sphere when he was in there?

Sounds legit!

Everyone (and many family members) working at SSFL 15 years later died early of contamination. They were careless in those days. Probably had a few buckets of the stuff laying around like at the Grand Canyon gift shop.

The argument is that it's plausible, not certain.

Yes, especially at a time when they were quite nonchalant about safety.

There wasn't just testing, but a whole bunch of other nuclear and chemical research these scientists were exposed to.

I wasn't referring to age of death, but mental performance while alive.

I've recently been engrossed by the question of how geniuses and highly intelligent people think. There's bits and pieces of information online. What I'd really like to see, though, is (a) geniuses describing their thought process and (b) a smart person voicing their interior monologue while attacking a problem.

1) Do you know of any information like this? Link?

2) Otherwise: do you think in a way that's different (better) than the average person?

I heard that Tesla could effectively "run" experiments in his mind, not just visualize them. Another pointer is that some "schools of thought" divide the art of thought into ability to visualize and ability to analyze where the latter has to be built upon the former and only when the former is perfected. My personal belief is that being a genius is about the ability to keep in mind many things at a time without losing details.

I read this about Edison.

Also, he read very fast; the librarians thought he did not like the books, which is why he was returning all of them every week.

Also, what geniuses usually can do is process a lot of information: read and write many papers. E.g. Terence Tao: his blog is always going strong, plus the polymath projects, plus regular teaching, plus his own (other) research.

Alexander Grothendieck wrote books full of brand new ideas about very abstract math. Amazing insights. The ability to work with unfamiliar cutting-edge things while keeping focus, while communicating them to others.

The most brilliant guy I ever knew was a founder of the field of computational geometry. He has probably co-authored 500 papers by now. He was a paper engine and would hold court with 10 or 15 people, and they would all figure out how to prove the theorems in a paper. His students said that when solving problems he just never seemed to go in the wrong direction, ever, at all! At age 29 - already a full professor - he got the NSF Waterman award - top researcher, all fields of science. His dad was a farmer in Austria. He was the first computer scientist ever to get the Waterman award. Herbert Edelsbrunner.

A question to mathematicians: what does your mind look like when you prove a theorem?

There is a comment here, saying that many hard theorems require one to build a complex branch of math and use it to prove a single statement. So I asked myself: what does it really look like to prove a theorem at such level?

I can tell what's going on in a programmer's mind. Software is very much like an imaginary mechanism and software engineers are mechanics. For example, this site is a database connected with an HTML page, so a programmer literally imagines a big gray building that means the database, another building that means the HTML page and a pipe that connects them. The database has a few tables: one for user accounts, another for posts like this, another for comments. So a programmer imagines 3 big blocks inside that building that are connected with pipes transferring data. Next to the database there is a controller device that sends and receives messages in the pipe connecting it to the users. This analogy continues down to tiny things like classes, methods and variables. The entire HN forum looks like a big multi-dimensional city-like structure with numerous pipes connecting pieces together. Experienced programmers not only organize this city well, but can also predict and eliminate complexity, e.g. they know that if you put that kind of building over there, others will inevitably connect tens of pipes to it and the entire city will be a mess, where you see a pipe and have no idea what it connects to and what will happen if you cut it.
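For what it's worth, the "city" I'm describing can be sketched in a few lines of code. The table names, column names, and controller function below are my own invention, not HN's actual schema; it's just the three "buildings" and one "pipe" made concrete:

```python
import sqlite3

# The three "buildings" (tables) and their "pipes" (foreign keys).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts    (id INTEGER PRIMARY KEY,
                           user_id INTEGER REFERENCES users(id), title TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY,
                           post_id INTEGER REFERENCES posts(id),
                           user_id INTEGER REFERENCES users(id), body TEXT);
""")

db.execute("INSERT INTO users (name) VALUES ('alice')")
db.execute("INSERT INTO posts (user_id, title) "
           "VALUES (1, 'The Unparalleled Genius of John von Neumann')")
db.execute("INSERT INTO comments (post_id, user_id, body) "
           "VALUES (1, 1, 'Great read.')")

def render_post(post_id):
    """The 'controller device': pulls data through the pipes, emits HTML."""
    title, = db.execute("SELECT title FROM posts WHERE id = ?",
                        (post_id,)).fetchone()
    comments = db.execute(
        "SELECT u.name, c.body FROM comments c "
        "JOIN users u ON u.id = c.user_id WHERE c.post_id = ?",
        (post_id,)).fetchall()
    rows = "".join(f"<li>{name}: {body}</li>" for name, body in comments)
    return f"<h1>{title}</h1><ul>{rows}</ul>"
```

The "messy city" intuition shows up immediately: add a fourth table that everything joins against and every query (pipe) in the system has to know about it.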

I think you are experiencing "typical mind fallacy". https://www.lesswrong.com/posts/baTWMegR42PAsH9qJ/generalizi...

I imagine there are programmers who build objects by imagining people asking them questions and thinking what the objects would answer, or all kinds of other things. People for whom Spring's Repository/Service/Controller/View are real things with personalities or flavors. I tend to think in a mix of "If I could just tell the computer what to do in English, what would I say?" (function names) and "Whose responsibility is this? Is this really their job?" (classes).

In my graduate math research I had to find patterns, formalize them and eventually prove more and more little results that added up to bigger results. I went through a lot of iterations, calculations and "experiments". The thing that guided me the most was that I had a feeling that things could be simplified, streamlined and generalized. It was a feeling that didn't go away until I found what I felt like I needed to find.

I find myself in a similar situation with development. There's an inescapable feeling that things could be created or improved. The feeling doesn't go away until I've done the work.

So I think what you're talking about is more so dependent on the person's mind rather than what they're doing. Like Conway's law, I think usually systems grow to mimic the developers' way of thinking, whether those are mathematical systems or software systems.

I can say, based on proving new NP-completeness complexity theorems and also optimal algorithms in my CS Theory PhD thesis, it's like a little orgasm light turned on in my brain for about 36h. Not as intense but definitely an extreme glow of elation that I carried around for almost 2d. Nothing like it since ....

> Needless to say, von Neumann‘s main contributions to the atomic bomb would not be as a lieutenant in the reserve of the ordnance department, but rather in the concept and design of the explosive lenses that were needed to compress the plutonium core of the Fat Man weapon that was later dropped on Nagasaki.

What are the equations that govern an atomic bomb? I don't want to say that I am asking for a friend... /s

I assume that by now the relevant equations would be well known anyway?

The equations they are referring to do not concern the nuclear aspects of an atomic bomb. They describe how to set up the geometry of conventional plastic explosives (different kinds of explosives that burn at different speeds) in just the right configuration so that the propagating shock waves interfere with each other to result in a spherical shock wave that pushes the core of the bomb (the radioactive part) towards a single fixed point at the center. If any of this is off even slightly, the forces will not be in balance and the core materials will shoot out of the weak point, rather than uniformly compress to bring the nuclear material to the critical point where the fission is self-sustaining.

Nice article. Are there any good biographies of John von Neumann that are highly recommended?

EDIT: found it at the end of the article

" For anyone interested in learning more about the life and work of John von Neumann, I especially recommend his friend Stanislaw Ulam’s 1958 essay John von Neumann 1903–1957 in the Bulletin of the American Mathematical Society 64 (3) pp 1–49 and the book John von Neumann by Norman Macrae (1992). "

The Martian's Daughter, written by his daughter, who was also an intellectual.


Given the success of “The Imitation Game” and “The Theory of Everything”, I’m disappointed that no one has made a compelling movie about von Neumann, who I think is a far more interesting character both mentally and in terms of personality.

I highly recommend "Turing's Cathedral" by George Dyson. I easily rate it n+1 stars out of n.

There is this dual biography from 1980:

John Von Neumann and Norbert Wiener : from mathematics to the technologies of life and death by Steve J Heims


Here is a review by John McCarthy:


Apparently there are none. Judging by the Amazon reviews of Macrae, I’d pass. I found some other discussions of the question:



I tend to agree, but Macrae isn't bad.
