"Von Neumann would carry on a conversation with my 3-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us." - Edward Teller
See [0] for a demonstration.
I watched a documentary from the '80s a long time ago. A mathematician (I can't remember his name) who worked with von Neumann in Los Alamos was interviewed. He described von Neumann's last weeks in the hospital - the cancer had already metastasized into his brain. The mathematician said something along these lines (I am citing from memory): "von Neumann was constantly visited by colleagues, who wanted to discuss their latest work with him. He tried to keep up, struggling, like in old times. But he couldn't. Try to imagine having one of the greatest minds maybe in the history of mankind. And then try to imagine losing this gift. It was terrible. I have never seen a man experience greater suffering."
Marina von Neumann (his daughter) later wrote this about his final weeks:
"After only a few minutes, my father made what seemed to be a very peculiar and frightening request from a man who was widely regarded as one of the greatest - if not the greatest - mathematician of the 20th century. He wanted me to give him two numbers, like 7 and 6 or 10 and 3, and ask him to tell me their sum. For as long as I can remember, I had always known that my father's major source of self-regard, what he felt to be the very essence of his being, was his incredible mental capacity. In this late stage of his illness, he must have been aware that this capacity was deteriorating rapidly, and the panic that caused was worse than any physical pain. In demanding that I test him on these elementary sums, he was seeking reassurance that at least a small fragment of his intellectual powers remained." [1]
Thanks, interesting story that I will pass on to my Dad (who worked with John von Neumann at the Institute for Advanced Study when he was young; Edward Teller hired my Dad at Berkeley and they were lifelong friends).
My Dad says that von Neumann liked to play his music loud and work late at night with no interruptions. I remember Edward Teller coming to a costume party at our house, and my Mom and one of her friends had a white bird costume for him to wear. He refused to wear the bird’s head, just wore the white wings. The next day in Herb Caen’s newspaper column he incorrectly talked about Edward Teller wearing an angel of peace costume.
I've taken a medication that occasionally makes me feel stupid. For a day I might be unable to continue reading a book without re-reading each sentence a few times. I might not be able to talk to people about a complicated or technical subject without stumbling over my words. I will find it difficult to think in abstract terms, and when I listen to others who talk about abstract values and concepts I have to continually relate it back to a concrete example, otherwise I'm lost. My IQ is normally between 140-155. When I go through these brain-fog days I estimate it's at around 90.
It feels terrible. When I first experienced it, I was terrified that it would be permanent, because that would keep me from doing my job and keeping up with my interests and hobbies. Now I only get this brain-fog every so often. I think it helps me communicate with people better and has helped me learn patience. I finally understand how some people might be genuinely, earnestly trying to understand what I'm trying to say or teach, but can't understand it because I'm not communicating it at the right level of analysis.
If you're intellectually gifted (many programmers are), you shouldn't take that for granted. You got lucky, and if you weren't lucky enough to be intelligent, it might have been impossible to do the same kind of work you do today. Please appreciate that.
In many ways what you describe feels similar to having a high IQ and ADHD. Sometimes being able to dig deeper into a topic than most, but sometimes entering a bit of a “mental fog” where you’re barely able to think at an average level. But instead of medication it can be “triggered” by sudden surprises (e.g. unexpected questions or situations can derail my mind for a few minutes), or environmental/nutrition changes, or even just lack of sleep.
Still it forces you to think at different levels of mental acumen and appreciate the differences people have in mental quickness. Though it’s also a challenge when people with constant “IQ” assume you don’t know a given topic if you’re in an off moment or day. Luckily the more experienced people seem to pick up on that. In the end communicating across different intellectual levels makes for a humbling challenge.
> If you're intellectually gifted (many programmers are)
Let's not jerk each other off too hard, eh? Just like everybody in life, in any trade or profession, most programmers are just average people like everybody else. They just happened to luck out and be good at a talent that pays well.
Odds are incredibly high you aren't any smarter than somebody who paints houses for a living, fills your prescription, or plays football for the NFL. We are all just people trying to make a living -- don't ever forget that.
Accountants, pharmacists and programmers are smart. They have to be to do their job. Decorators, sports people and drivers, for example, can be smart, but it is not required to do their job. I agree we should all respect each other's talents and abilities that go beyond adding up numbers and thinking in symbols, since these are boring talents anyway, but let's not stretch the truth and say that everyone is equally 'smart'.
You may be using "smarter" to mean something different than IQ, but if you mean IQ then you're wrong. Different professions have different distributions of IQ, and programmers are among the professions with the highest average IQ. They definitely skew higher than house painters.
If you genuinely believe programmers have a higher IQ than normal, Dunning-Kruger would like to have a word with you.
Too many engineers are way too convinced that knowing some kind of engineering (e.g. programming) automatically makes them smart at everything outside their narrow domain of expertise.
I don't know if coding makes you smart, but the causal relationship might go the other way: you're more likely to be good at coding, the smarter you are.
There is a reason why a cashier works for minimum wage.
A few days ago I went to a grocery store to buy sparkling water. I got 16 small bottles and arranged them as a 4 by 4 grid in front of the cashier, so she wouldn't need to spend time counting them and delaying the line. It took her over 10 seconds to count the bottles: apparently, she didn't recognize the pattern and counted them in groups of 3-4 bottles, covering the counted ones with her hands to simplify the process. Programmers don't even need to multiply 4x4, they just see the answer. This scene hints that the cashier's analytical skills are next to none, and this alone explains why she is a cashier.
Another example. There is a curious simple test to check your working memory capacity. Imagine a 3x3 grid and write the words oil/gas/dry into it, one per row. Now read off all the words that you see on the grid, including the vertical ones. Not all people have visual imaginations: some operate with graph-like structures and represent the grid as a set of logical statements: row 1 is gas, row 2 is oil, row 3 is dry. This is fine as long as they do it efficiently. Most people in this task will resort to the snail analytical approach and will be thinking like: cell 1-2 is A, so the cell below it, 2-2, belongs to word 2, which is oil, so its second letter is I, and our current sequence is AI. Obviously it will take them ages to enumerate all the words this way. High-level programmers can keep the entire grid in memory, either as an image or in symbolic form, and thus can enumerate the words quickly. Why does this example matter? Programmers have to keep many objects and the connections between them in memory.
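The grid exercise is easy to check mechanically. A minimal sketch (assuming the row order gas/oil/dry, under which the columns happen to spell real words):

```python
# The working-memory test: three words written into a 3x3 grid,
# one per row. The "hidden" words are read down the columns.
rows = ["gas", "oil", "dry"]

# Column i is the i-th letter of each row, read top to bottom.
cols = ["".join(row[i] for row in rows) for i in range(3)]

print(rows)  # ['gas', 'oil', 'dry']
print(cols)  # ['god', 'air', 'sly']
```

The point of the test is that the machine-trivial column read is exactly the step that strains human working memory: you have to hold all nine letters in place at once to read vertically.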
If everybody had these raw cognitive skills, programmers would get the standard minimum wage. Same for accountants, lawyers, bankers.
> This scene hints that the cashier's analytical skills are next to none and this alone explains why she is a cashier.
Her numerical analytics skills perhaps, but she might be great at something else. Maybe she is a poet, or a musician, or great at handling 5 kids, or something else that you are terrible at.
Just something that doesn't pay well, so she has to be a cashier.
This is kind of a form of Moravec's paradox: we assume what is easy for us must be easy for others, and what is hard for us must be also hard for others too.
I can't find the reference right now, but this was a famous problem in early testing of monkeys' intelligence. They showed the monkeys pictures of humans and they couldn't distinguish them, so it was assumed they were fairly dumb. But then eventually someone figured out they were much better at distinguishing pictures of other monkeys.
Of course it's still possible that the cashier is not good at anything, but in my experience that's very rare. Most people have some skills.
Eh. Poetry and handling kids is orthogonal to intelligence.
Intelligence is a very narrow and specific skill of discerning the real. It allows us to predict things. Coincidentally, it allows us to make money and thus is well paid. It's ok if others disagree with me.
My two examples above are meant to hint that intelligence consists of two distinct skills: ability to keep a still detailed image in your mind (the 3x3 grid example) and ability to analyze this still image (the counting bottles example). The latter builds on the former because if the image is blurry in your mind, there is nothing to analyze (you can't see words in the 3x3 grid if it keeps floating away). We can go further and divide the two skills into say 10 distinct levels of mastery, define their characteristics, but the point remains the same: it's a steep ladder that has to be climbed if one wants to get this skill.
Poetry, music and handling kids are different skills and are of no help in climbing the ladder of intelligence. Now, I agree that compassion and other skills can be as useful as intelligence, but they are different skills and have to be mastered separately.
Handling kids, maybe, but poetry and music are only orthogonal to intelligence at the amateur level. Professional poets and musicians tend to be highly intelligent, even by your definition.
It took me about five minutes to even understand the task. I first pictured a 3x3 grid with a word in each box. I thought I was supposed to populate the boxes with any word from the list -- oil, gas, or dry -- and then read them back.
I guess I'm living proof that not all programmers are smart.
Walking home that day I was obsessed with this. I guess I did okay. Found at least one five-letter word ("grail", I remember). I can't see all the letters at the same time, so I feel like I'm cheating when I do it.
On the other hand even when I think of a three letter word I don't see the letters, I just kind of know what they are.
What absolute nonsense. Interestingly you refute the idea that intelligence determines someone’s “worth,” yet from the start you read that into the original comment even though the commenter said nothing to suggest that viewpoint.
Your second paragraph is wholly unrelated, since the OC isn't saying that every programmer is smarter than everyone else. And your first is trite humanist garbage.
“Just happened to luck out and be good at a talent that pays well”
And, coincidentally, that “talent” (abstract thinking, clarity and extended focus of thought) “””happen””” to be highly correlated with higher IQ.
Still rats in the race, just rats with a slightly higher score on a particular dimension.
It's not nonsense. I can't stand people who post crap like "OMG I can program so I have to have a higher than average IQ". And then they link to articles that support their little circle jerk. No. Sorry. You don't. You are average. I'm average. We are all average. Just 'cause you can code doesn't mean you are suddenly god's gift to intelligence. You aren't. Really. Sorry. You are just as stupid as everybody else.
John von Neumann. Dude is a genius. Einstein. Also. you? me? sorry. Doesn't matter if you can code. People can paint cars way better than you. They can make a perfect french fry. They can inspect a clogged sewer line way better than you. They know just the best ways to start an IV. Who is more intelligent?
Just because you can do $PROFESSION doesn't make you above average in intelligence. Period. Full stop. To think any other way (and for somebody to post a link to some BS article supporting their assertion) is completely absolutely 100% arrogant BS. Get off your high horse people.
You're stunning me here. It's like you're objecting to someone saying gymnasts are coordinated or air traffic controllers can handle a lot of stress. Intelligence is a core qualification for programming, like height for basketball. You can't say "I play basketball, so I'm taller than others", but you can say "We're all basketball players, so we're likely taller than most". We're _selected_ for intelligence.
> Evidence has been mounting for decades that for most non-sport domains and for most people, “natural talent” is not an absolute requirement for reaching high levels of expertise.
> Other than the sports that depend on specific physical prerequisites, few domains have hard genetic limits for expertise. There is a way in which natural ability might contribute to high expertise in non-athletic domains, but it’s not domain-specific “natural” gifts... it’s a natural ability for focused practice.
…
> Even at the very top levels in most non-sport domains, there’s little evidence that “natural talent” is a hard requirement. But where it might exist, it’s most likely to show up at the beginning of the curve and the very very very top.
That may be an inspiring book, but if you're interested in what's actually known about talent, success, and the limits on our ability to measure and predict such things, then you should look elsewhere.
(Ask yourself how well the writer could explain monty hall, or regression to the mean, or what's wrong with p-values, to get an idea whether they can possibly have a handle on the material.)
> Evidence has been mounting for decades that for most non-sport domains and for most people, “natural talent” is not an absolute requirement for reaching high levels of expertise.
This is probably true for programming too.
But just because something isn't necessary, it doesn't follow that there is no correlation.
One Muggsy Bogues proves natural height is not an absolute requirement for playing in the NBA, but does not prove that basketball players are not taller than average.
>It's like you're objecting to someone saying gymnasts are coordinated or air traffic controllers can handle a lot of stress.
It's the causation that's interesting, though: do gymnasts become coordinated through practice, or do the most coordinated people go into gymnastics? I think the OP is arguing the former, and I think I agree. In that case, programming is not unique re: intelligence.
>Interestingly you refute the idea that intelligence determines someone’s “worth,”
Well, this is true: intelligence does _not_ determine someone's worth.
Ultimately how intelligent you are is irrelevant providing that you _can_ provide value.
>Still rats in the race, just rats with a slightly higher score on a particular dimension.
Yes, but surrounded by other rats with a slightly higher score on a particular dimension.
But anyone can still learn how to do it: abstract reasoning is not reserved for the "intelligent", it's open to anyone who is of at least average intelligence.
> abstract reasoning is not reserved for the "intelligent", it's open to anyone who is of average intelligence at least.
Interestingly this implies that the average programmer does have higher than average intelligence (as measured by IQ scores).
The reasoning goes like this:
- At least average intelligence is required for abstract reasoning (and programming)
- This means that the distribution of intelligence for programmers is truncated at roughly the population average
- This means that the mean and median must both be above average.
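The truncation argument above is easy to demonstrate numerically. A minimal sketch, using the standard IQ scaling (mean 100, standard deviation 15) and the hypothetical assumption that the selected group is simply everyone at or above the population average:

```python
import random

# Simulate a population on the IQ scale (mean 100, sd 15),
# then select only those at or above the population mean,
# mimicking a floor on the trait required for the profession.
random.seed(0)
population = [random.gauss(100, 15) for _ in range(100_000)]
selected = [iq for iq in population if iq >= 100]

pop_mean = sum(population) / len(population)
sel_mean = sum(selected) / len(selected)

# The mean of the truncated group necessarily exceeds the
# population mean (here by roughly 12 points).
print(round(pop_mean), round(sel_mean))
```

Any distribution cut off at its average must have both its mean and median above that average, which is the whole of the argument; the simulation just puts a rough number on the gap.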
The sad reality is that cognitive decline with age is more likely than not.
I have prebuilt a set of habits that I hope will serve me for when I need to rely on them to get me through the day.
I had a conversation with an elderly neurologist at a wedding on this topic. His top item of advice was to do things that put your awareness into your moving body as much as possible, as a daily habit. No surprise he and his wife were the only folks in their age bracket rocking out on the dance floor later.
Interesting. Does that mean that mostly relying on muscle memory is not as beneficial in this context? E.g. if you're already good at some technical sport (e.g. tennis, basketball), playing it wouldn't be as beneficial as, say, learning a new dance?
Glasses and ID together.
Car keys on top of work items like laptop and badge.
White board next to front door with dates and reminders and mail to drop off.
Arrange things like you would for a Sims character.
I wonder if you've read the novel "Flowers for Algernon"? If not I have a feeling you'd like it (although perhaps find some of it difficult going - I certainly did).
I read the short story and found it to be an enjoyable but somewhat depressing read. Losing your intellectual abilities while being able to perceive that you’re losing them sounds awful :(
I read the synopsis and "noped" out pretty hard. I'm definitely no genius, but losing what little I have would be horrifying. All those years of work to build an understanding...gone.
Here is that exact documentary you were thinking of [0]. It's really a fantastic documentary on the incredible feats of von Neumann and worth a watch by everyone here.
That person you quoted was Edward Teller and the interview can be found at 54:58 [1].
I read somewhere that the young boy asking von Neumann a question at 6:13 [2] is actually Bill Clinton. Can anyone verify this?
This is terribly sad. It's also a good lesson in why not to tie your well-being to your identity (or anything else that's impermanent -- which, actually, includes everything). It's a lesson I'm still learning.
Yeah, you can't not tie your identity to your work to at least some degree, but you can invest in other things, like family and friends, that will soften the blow of losing it.
It’s a lesson in the inevitable consequences of loving something fiercely, be that your own intellect, your spouse, or a child. You may lose any of these things before you die, and the pain will probably be proportional to the love you felt. I don’t think that’s a lesson in why not to allow yourself to love that way at all.
There may be mental pathways that allow for love without attachment. I personally don’t see them yet; perhaps I need to learn more about Zen or Daoism or something. In my unenlightened state, the only way I see to avoid the pain of loss is avoiding love in the first place, which doesn’t look like a good idea.
I'm far from enlightened myself, but meditation certainly has helped me a great deal in this regard. It's not that I don't grieve loss, but it's no longer such a terrifying and identity-destroying affair.
I enjoyed your video [0], also yeesh. 90 seconds in or so, "For the moment we'll leave the General Dynamics exhibit and come over here to an exhibit we have here; this is the electronics exhibit, whereby, as you see, they have a number of, uh, gadgets, I guess you might call them that... Or is that the wrong word?"
That young man who wants to be a lawyer was actually a lot better prepared for this interview than the guy with the microphone!
In an answer to the question of why there is no evidence of intelligent life beyond earth despite the high probability of it existing, Szilárd responded: "They are already here among us – they just call themselves Hungarians." [1]
He mentions Curie as non-Ashkenazi, but even with her I don't know: she was born Polish with the full name "Maria Salomea Skłodowska"(-Curie), and her middle name "Salomea", given after her grandmother, sounds to me like it could be of Jewish origin. I wonder what the percentage of Nobel prize winners etc. would look like if you really dug a few generations deeper or did genetic tests.
I have a hypothesis that almost all of the greatest intellectual achievements come from a very small number of bloodlines. I really doubt the idea that people are just born and some happen to end up the smartest of their generation; I think there's more to the story.
> I have a hypothesis that almost all of the greatest intellectual achievements come from a very small number of bloodlines.
Nice hypothesis, but not exactly backed up by history. So many different societies have made so many diverse contributions to innovation, I'd be pretty amazed if you could draw a cohesive line through all (or most) of them.
Also, separately, it's pretty difficult to separate "bloodlines", by which I assume you mean genetically inherited traits, from socially inherited traits.
A great physicist is probably more likely than average to have offspring that are also great physicists. But is that because of their "blood" (i.e. DNA) or because the children grew up in a household exposed to physics at a much higher degree than average. The children's "blood" is an inherited DNA trait, but their upbringing is an inherited social trait.
The question boils down to the age-old nature vs. nurture argument. All signs seem to point to nurture being the far more powerful influence.
I think the underlying mechanism is simply natural selection: just as you can breed dogs, tulips or bacteria for traits, humans can be "bred" for intellectual performance. No religion/culture does it, with the exception of the Ashkenazi.
> I think the underlying mechanism is simply natural selection
Genetic natural selection takes thousands of years, or at an absolute minimum multiple generations. Social selection occurs far more rapidly, often within a single generation. A person born in the 1950s that is genetically predisposed to manual labor may do well for the first few decades of their life, but as society changes and starts valuing white collar work more, they will do far worse. Their genetics didn't change, but society did.
Those that favor nature over nurture vastly underestimate the time-scales over which natural selection occurs compared to social selection.
I think you underestimate how powerful this mechanism can be. Take 20 years for one generation, i.e. 100 years is 5 generations. We're talking about roughly the time since 800 CE, that is 60 generations. Imagine taking the smartest people to reproduce, then taking the smartest of their kids, and so on - 60 times. You will see an effect. You can optimize on almost anything, e.g. "time you can dive underwater", and see a difference compared to other people after 60 generations: see https://www.sciencemag.org/news/2018/04/indonesian-divers-ha...
Actually this isn't really true. If you go back over the last 200 years, intellectual contributions are extremely concentrated. My point was further along these lines:
But what I am saying is this: the highest levels of genius, I would argue, are categorically different from just highly intelligent people. I think there's something different about the way their brains are structured. I've interacted with some of the top minds in a few fields, and I never come away with the feeling that they are merely farther along some kind of "intelligence spectrum". It always feels as if their thinking is different, i.e. its source and methods are a different type of brain. I think there are a few mutations floating around in a few different pools of the population.
I am not sure that a ‘bloodline’ is a valid genetic concept. It depends a lot who the partner is, after all they provide 50% of the offspring’s genetic material. Put another way, if you are a genius in some field, the chance of having a child with someone of equal or greater aptitude is effectively zero.
The ‘more to the story’ is possibly just affluence and family stability, which are also culturally embedded.
> The ‘more to the story’ is possibly just affluence and family stability, which are also culturally embedded.
At a guess I would think these are the more important factors.
Being born into an affluent family gives access to higher likelihood of more varying influences and stimuli from an early age. Children are by their nature curious creatures, so having the possibility to satisfy their curiosity in more avenues should reflect on their later ability to absorb new information in these fields (because they already have an established baseline knowledge).
Family stability probably helps to support curiosity and emotional safety. When failures are treated as positive experiences ("what did we learn from this?"), as opposed to wasted effort, you are more likely to allow yourself to seek more such experiences.
Of course there are outliers. But over generations, I would expect more innovations and brilliant minds to emerge from families who can provide and support their offspring with the environment to flourish in their fields of interest.
> But what I am saying is this, the highest levels of genius, I would argue, are categorically different than just highly intelligent people. I think there's something different about the way their brains are structured. I've interacted with some of the top minds in a few fields, and I never come away with the feeling that they are merely farther along some kind of "intelligence spectrum". It always feels as if their thinking is different, i.e. its source and methods are a different type of brain. I think there are a few mutations floating around in a few different pools of the population.
This is just unnecessarily many moving parts. All it takes to have extra geniuses is for grad students to marry each other and have a bunch of kids, instead of dating in the general pool. Also, the thinking of regular people is different in myriad ways, if you pay attention.
It's not talked about a lot, but basically Ashkenazis are the genesis of a lot of modern intellectual ideas and companies. Both FB and Google are Ashkenazi creations.
I don't know about the genetics part, but what I have noticed is that Jewish "leaf nodes" tend to be great mathematicians or otherwise intellectual. What I mean by this is a child/person of Jewish descent, that is no longer religious and also does not have children. They are thus a leaf node in the tree. A current example would be Grigory Perelman.
I don't like to conjecture about cleverness and intelligence, and especially not genetics. But if there is one thing these Jewish leaf nodes have in common, it is a really good education.
I went to Budapest on a day trip (hooray for Easyjet) back in the early 2000s, and can attest to this. I definitely felt like I was a stranger in a strange land.
> some on the EDVAC design team contended that the stored-program concept had evolved out of meetings at the University of Pennsylvania's Moore School of Electrical Engineering predating von Neumann's activity as a consultant there, and that much of the work represented in the First Draft was no more than a translation of the discussed concepts into the language of formal logic in which von Neumann was fluent.
He also got the US military to pay for the R&D and to make the results of all the research public domain. He got into a big fight with Einstein over whether or not they would do experiments at IAS (Einstein only wanted to do math and theory there).
"Differences over individual contributions and patents divided the group at the Moore School. In keeping with the spirit of academic enquiry, von Neumann was determined that advances be kept in the public domain. ECP progress reports were widely disseminated. As a consequence, the project had widespread influence. Copies of the IAS machine appeared nationally"
He also posthumously patented a "non-von Neumann" architecture that has never been built, to my knowledge. I'm surprised that all the attention on quantum computing hasn't revived the idea.
There's been a lot of interesting research in superconducting computer architectures that bear similarities to these concepts; some of it pursuant to quantum computing, some classical. [1] DACs that use switching of Josephson junctions to store persistent current, [2, 3] logic and storage elements based on the superconducting phase change (e.g. the ability of a metal to hold a voltage differential), and even [4] rudimentary AQFP-based FPGAs! I've seen talks describing more elaborate versions of [4] but can't find a good paper. A pretty good overview of this frontier is [5].
Many of the examples proposed sound similar to other devices and things used in - for example - flash memory, DRAM, and other similar technologies.
Based on what I could see from the patent, it looks like an exploration of using such elements in place of the vacuum tubes of the day; it also seems like it might use non-linear properties, making it somewhat of an "analog computer" in a way.
There also seem to be hints at these elements being used in an "artificial neuron" manner (hardware-based artificial neural networks and neurons were a topic of interest at the time).
Strangely (I may have missed it - I only skimmed the patent) the use of the transistor seems to be missing (again, not sure)...but if this is true, it may be because again - it seems to be exploring non-linear storage and response as memory and computational elements.
The ideas of using - say - capacitive and inductive elements for memory elements (at a minimum) was known back then; it was also known how to use inductive-only elements for amplification and switching purposes (google "magnetic amplifiers" - the tech goes back a long way). But this patent seems to be using both in a different manner for a combination computation and memory (again, similar to an artificial neuron).
It's a very interesting patent; in a similar scope as to Turing's writings on neural network systems. Thank you for bringing it to our attention.
Whitehead's quote from that article ("Everything of importance has been said before by somebody who did not discover it") is elegantly stating that it's unoriginal turtles all the way down.
Interesting read.
My explanation: there are a lot more B-class scientists than A-class ones. They are more likely to stumble on new ideas, but they might be unable to articulate them, or they may not have an audience that will listen. It takes an A-class scientist to bring those ideas forward.
> Stigler himself named the sociologist Robert K. Merton as the discoverer of "Stigler's law" to show that it follows its own decree, though the phenomenon had previously been noted by others.
Yes! It’s a pretty big controversy so far as computing goes and a very interesting read. ENIAC (ISBN 978-0802713483) is a good account that includes it.
I have to wonder what else gets washed away in all the myth-making about these guys.
Can anyone recommend readings on von Neumann that highlight his non-mathematical achievements? Obviously he was primarily a physicist and mathematician, but for a non-mathematician, the long list of academic publications is hard to interpret and appreciate. For example, more in the vein of these:
- Reportedly, von Neumann possessed an eidetic memory, and so was able to recall complete novels and pages of the phone directory on command. This enabled him to accumulate an almost encyclopedic knowledge of whatever he read, such as the history of the Peloponnesian Wars, the Trial of Joan of Arc and Byzantine history (Leonard, 2010). A Princeton professor of the latter topic once stated that by the time he was in his thirties, Johnny had greater expertise in Byzantine history than he did (Blair, 1957).
- ...conversing in Ancient Greek at age six...
- On his deathbed, he reportedly entertained his brother by reciting the first few lines of each page from Goethe’s Faust, word-for-word, by heart (Blair, 1957).
> Reportedly, von Neumann possessed an eidetic memory, and so was able to recall complete novels and pages of the phone directory on command. This enabled him to accumulate an almost encyclopedic knowledge of whatever he read
If he was able to recall entire pages of the phone directory on command, I wonder whether he could also recall the (near) verbatim text of every novel he had read, and to what degree he could do this, or at least to what degree he could comprehensively recall key points, facts, and timelines.
I would think he would have spent some time speculating on how the brain stores memories, I wonder if any of his theories were ever captured in some form.
Supposedly, yes-- during his final stay in the hospital, his brother read to him from a book they'd enjoyed during their childhood, Dickens' A Tale of Two Cities.
When his brother had to turn the page, John would continue the narration from memory while his brother found his place on the subsequent page.
Given that he could be occasionally absent minded, I suspect that it had to be something that piqued his interest, but his sense of what was interesting was extremely broad.
He did in fact speculate on the workings of the brain in The Computer and the Brain, which is based on a lecture series he had planned out but did not deliver.
It was more in the context of automata theory, but as someone with an interest in AI, automata, and neuroscience, it was frankly rather dank[0].
A lot of the pioneering work was, and is, enjoyable in part because it's original and speculative, so you don't have to master the literature to make sense of it; you can just pick a paper and go. I'd recommend reading McCulloch and Pitts, plus Lettvin, but others might have some equally lit[1] recommendations.
--
0. In the contemporary sense, cf. "cool", "dope", or "excellent"; not dank like a root cellar.
My word choice is more driven by exposure to the dataset that I'm working with.
There have been some recent successes with autoencoders trained on virtual sensory input (i.e., video games), e.g., neural networks that can simulate the dynamics of these environments with surprising fidelity.
Of course, learning to play video games at a high level is trivially easy, as everyone in the field now knows.
The next challenge is, naturally, to make money doing this.
But how?
After the traditional thirty seconds of research before undertaking a major project, I determined that the only way to make money from video games is to become a popular streamer.
So now I am training an agent to generate video of it playing and reacting to an imaginary game and equally fictitious Twitch viewers, with a dataset drawn from the top Fortnite streamers.
The reward function is a blend of subscribers, donations, and (logarithmically scaled) misogyny in the chat.
Thus far, I've only managed to create some sort of window into hell, where the "game" consists of unceasing violence, murder after murder after murder as towers of mismatched material swell and fall in ever transforming locations on the isle while the chat endlessly subscribes, spams, and emotes in cackling glee and the superimposed webcam video features a... thing with too many eyes and hands screaming incoherently.
At first I thought it was a problem with my dataset, so I started watching some of the streams myself.
This has not yielded insight into the whole "nightmare vision" output of my model, but it has expanded my vocabulary on the twin subjects of combustibles and comestibles, which I feel is a reasonable trade-off for the sanity battering associated with this whole endeavour.
>He died at age 53 on February 8, 1957, at the Walter Reed Army Medical Center in Washington, D.C., under military security lest he reveal military secrets while heavily medicated.
John von Neumann and the Origins of Modern Computing (Aspray) is a great historical account and details many of his other contributions (e.g. game theory, automata, but especially meteorology).
I own a biography of von Neumann called "John von Neumann", by Norman Macrae. It is serviceable and gives you a feeling for von Neumann's life, but it is not particularly deep. It does contain examples of the sort of anecdotes you mentioned, however.
Not really. From the wikipedia page on eidetic memory [1]:
> Although the terms eidetic memory and photographic memory are popularly used interchangeably,[1] they are also distinguished, with eidetic memory referring to the ability to view memories like photographs for a few minutes,[3] and photographic memory referring to the ability to recall pages of text or numbers, or similar, in great detail.[4][5] When the concepts are distinguished, eidetic memory is reported to occur in a small number of children and as something generally not found in adults,[2][6] while true photographic memory has never been demonstrated to exist.
I've always wondered about that, because a few savants like Kim Peek are/were able to recall everything they read.
"He could speed through a book in about an hour and remember almost everything he had read, memorizing vast amounts of information in subjects ranging from history and literature, geography and numbers to sports, music and dates. Peek read by scanning the left page with his left eye, then the right page with his right eye. According to an article in The Times newspaper, he could accurately recall the contents of at least 12,000 books."
I think the test case (which nobody has passed) for true photographic memory is: you are given an image of a bunch of random dots with no apparent structure, then you are given a second such image, and you have to mentally combine them and say what image they form. Because if you superimpose the two images of random dots (say, using transparent slides, one on top of the other), they actually form a photograph of Marilyn Monroe or something, but each image in isolation looks totally random.
If someone had photographic memory then they could do this superimposition in their memory, but it doesn’t seem like anybody can.
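For the curious, such a pair of noise images can actually be constructed, in the spirit of Naor-Shamir visual cryptography. The sketch below is illustrative, not from the thread: it uses XOR to combine the shares, whereas physically stacking transparencies implements an OR and needs the Naor-Shamir subpixel trick. XOR keeps the code short while preserving the key property, namely that each share alone is indistinguishable from pure noise.

```python
import random

def make_shares(secret, seed=None):
    """Split a binary image (list of rows of 0/1 pixels) into two
    noise shares. Each share alone is uniformly random; XOR-ing them
    pixel-wise recovers the secret."""
    rng = random.Random(seed)
    share1 = [[rng.randint(0, 1) for _ in row] for row in secret]
    # share2 = share1 XOR secret, so share1 XOR share2 = secret.
    share2 = [[a ^ b for a, b in zip(r1, r2)]
              for r1, r2 in zip(share1, secret)]
    return share1, share2

def overlay(s1, s2):
    """Mentally 'superimpose' the two shares (XOR per pixel)."""
    return [[a ^ b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

# A tiny 5x5 letter "T" as the hidden picture.
secret = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
s1, s2 = make_shares(secret, seed=42)
assert overlay(s1, s2) == secret  # the combined image reveals the "T"
```

Passing the test would amount to performing that final XOR entirely in memory, after seeing the two shares one at a time.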
I don't know if I agree that it's a good test. If I had two images in front of me of random dots, I cannot superimpose them and see a picture of Marilyn Monroe, no matter how long I can look at them. (ah, actually I could go cross-eyed to make them overlap in my field of vision, but I can't do it mentally).
I could imagine a "spot the difference" type of test would be a good one though. The fields of random dots are identical save for one dot, and you have say where it is. Something like that.
Right, the point of the test is that you can't do it unless you can (relatively quickly) memorize what looks like a random picture of noise. With true photographic memory, that should be possible.
I still don't agree - I'm saying that even if I don't have to remember the field of random dots, even if it's right in front of me, I still can't see an image of Marilyn Monroe. Being able to remember it wouldn't help. If being able to remember it wouldn't help me pass the test, then the test will have a high false-negative rate, and is not a good test of memory.
I am so happy to see this article about Von Neumann on HN!!! I have been posting about him on here for years, and have read all his books, but not as many of his papers as I would like, they are really hard! I have been working for many years now on continuing his theories of weather control technology and self-replicating machines. He is my absolute personal hero and the scientist who, far above all others, I consider to be the one in whose footsteps I want to follow.
That's interesting! Have you made progress on self-replication? I think it's a very important problem. I've made some notes categorized under topics/self-replication.html in http://canonical.org/~kragen/dercuano-20191110.tar.gz, but of course I'm no Johnny von Neumann.
Von Neumann is on the record as saying he believed computers were a subfield of self-replicating machines. I believe they will be very important in the future and will have many applications.
Not really, unfortunately. I think it would be fun to build a 3D printer that can replicate itself, but I have not tried building any prototypes because, besides being too expensive, I also haven't taken my research far enough. I am a very ambitious person and have big plans for the future! What I have done on this subject, besides reading about it, is very theoretical and abstract; it's more of an aspiration right now than an active research project, to be completely honest.
Well, RepRap has the philosophy that self-replicating systems such as hedgehogs and raspberries need some definite set of “vitamins” available prefabricated in their environment for self-replication, and RepRap chose things like threaded rod, NEMA motors, hotends, and Arduinos as their vitamins. This was extremely successful at making 3-D printers mainstream — every popular 3-D printer out there derives from RepRap designs — but not at producing an exponentially growing quantity of 3-D printers printed by 3-D printers.
Von Neumann invented a novel paradigm for computing using harmonic integration of analog oscillations. The patent was granted after his death.
While there were a few prototypes in the 50s, the transistor killed it. A fully functional version of this computational architecture has never been built, to my knowledge:
This type of computer is also called a "parametron" (https://en.wikipedia.org/wiki/Parametron). A Japanese researcher named Eiichi Goto independently invented it around the same time as von Neumann, and developed it much further than von Neumann did, so the idea is more often associated with Goto than with von Neumann.
The basic idea is that if a nonlinear harmonic oscillator is driven at twice its resonant frequency, it will oscillate stably in either of two phases. The two phases represent "0" and "1". If the driving signal is switched off and on again, the oscillator will arbitrarily "pick" a phase to stabilize on. If it's exposed to an input signal from another oscillator as it's turning on, it will always pick the same phase as the input signal. This makes it possible to copy a "0" or "1" from one oscillator to another. If the oscillator is exposed to input signals from several other oscillators, it will pick by "majority vote". Finally, a NOT-gate can be built by inverting the signal polarity. This set of primitives is sufficient to build arbitrary logic gates and flip-flops.
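The majority-vote-plus-NOT primitive described above really is logically universal: fixing one input of the majority gate yields AND and OR. A toy sketch (the oscillator physics is abstracted away to plain 0/1 values; this illustrates only the logic, not the phase dynamics):

```python
def maj(a, b, c):
    """Three-input majority vote: a parametron settles on the phase
    shared by most of its input oscillators."""
    return 1 if a + b + c >= 2 else 0

def NOT(a):
    """Inverting the coupling signal's polarity flips the bit."""
    return 1 - a

# Tying one majority input to a constant gives the basic gates:
def AND(a, b):
    return maj(a, b, 0)  # majority with a constant 0 input

def OR(a, b):
    return maj(a, b, 1)  # majority with a constant 1 input

# Verify against the truth tables:
for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
assert NOT(0) == 1 and NOT(1) == 0
```

With {MAJ, NOT} in hand, any Boolean function can be composed, which is why the phase-copying and majority-vote behavior of coupled parametrons suffices to build a complete computer.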
Goto's paper has an excellent explanation and more detail:
Goto, E. (1959). The Parametron, a Digital Computing Element Which Utilizes Parametric Oscillation. Proceedings of the IRE, 47(8), 1304–1316. doi:10.1109/jrproc.1959.287195
Ha! No, but the computer would resemble brain oscillations and harmonic integration much more than linear clock cycles. We still don't know much about how the brain uses rhythms and harmonies to compute, but it does (brainwave bands are octaves, i.e. harmonic doublings: 2.5, 5, 10, 20, 40 Hz).
Pretty good article. Can't wait to read other, less sensational articles recommended here.
One particular error in the article stood out to me: the Trinity test site is in White Sands, New Mexico, not Nevada. This was immediately noticeable because I've been to the Trinity site.
von Neumann, Oppenheimer, Bohr, Einstein, Rutherford, Turing, Teller, Szilard, Wigner, Meitner... the list goes on... -- how did that time produce so many people of colossal intellect?
War certainly can't be the primary factor, given that many of them were brilliant/productive even before WWI
There is no reason to think we are no longer producing people with colossal intellect. They are probably now collaborators on a large project, since that is what math and science have become.
Plus we haven’t had enough time to make myths about the recent times. Of course the 20th century was incredible for physics, but we don’t know yet what will make the recent years special. We may, for example, in 80 years wonder about how it was that the early 21st century produced so many ambitious, large-scale experiments.
It's also possible that the greatest minds of our time aren't engaged in furthering our understanding of the universe, but instead devising more efficient ways of displaying internet advertising or faster stock trading algorithms.
The silver lining is that many of these companies publicly release their research. E.g., Rob Pike and Ken Thompson may be working on advertising, but we can still benefit from Golang.
I have no idea about the state of modern mathematics but I wouldn't have guessed that the idea of a lone mathematician has passed. What are the big projects in mathematics?
I'm not sure if they're necessarily big projects, but proving difficult theorems today has often involved the construction of huge "machinery", whole branches of math, that then gets applied to simple-to-state-but-difficult-to-prove theorems; the classic example being Wiles using the theory of modular forms to prove Fermat's Last Theorem.
And this situation comes because all (or the great majority) of the easy theorems have been proved for most established branches of math.
This also means great discoveries are coming at a later age for mathematicians, as simply getting up to speed in complex fields takes years.
All of this implies it would be hard to have another Von Neumann today.
It would take people like us (anyone who doesn't already know what the big projects in math are) years of study to resolve that list into anything more than names. However, if you're looking for a modern math celebrity, I'd volunteer Terence Tao. Although only a mathematician could understand what he's working on, it's clear from how he's talked about that in eighty years people will be saying, "I wonder if there will be any people like Tao in my lifetime."
Sadly, not anymore. The government is doing everything to dumb down education and research in academia. It's really pathetic. We are doomed for generations thanks to this.
Just a recent example: a Prezi.com founder decided to create an alternative private school to show and lead by example. They didn't get the accreditation this year. If you stick out, they shut you down.
Both quality education and an appreciation for science and intellectual achievements in general, could easily be a big factor. These days, scientists, journalists and other truth-seeking professions are often criticised and discredited because the facts they find are politically inconvenient (global warming, anyone?). Education is often seen primarily as an expense, rather than an investment. People admire pop stars more than scientists. Truth is apparently whatever you strongly believe it to be, these days.
It doesn't surprise me at all that the current political climate is not great for fostering great minds.
> Hungary had a great education system at the time.
I seriously doubt you could back this. You are generalizing from a single school. Might as well argue that socialist Hungary had a great education system because of Fazekas. Neither are true. I happen to have a maths teacher degree from a Hungarian university and we studied Hungarian education history and I learned much more about education systems later on my own (and this is not to say this university maths teacher course was a good one, quite the opposite). If you want to know what great education at the time looked like, read up on Summerhill -- it was founded in 1921 but humanistic education has been around for centuries.
While having a much smaller population (10M), Hungary places 4th in worldwide medal rankings on the International Mathematics Olympiad [1] behind China (1.5B), USA (300M) and Russia (150M)
If we're talking about von Neumann, or even just the math olympiad, then clearly we're not talking about how well the education system serves the 50th percentile. It's possible for a system to be awful for most, and somehow find and train the top few percent brilliantly.
Mathematics culture and mathematical pedagogy in Hungary have a legendary reputation. I don't know how their system works now, but at least up to 80s or 90s it was seen as being the highest level. It emphasized creativity, communication and problem solving.
> but at least up to 80s or 90s it was seen as being the highest level. It emphasized creativity, communication and problem solving.
What utter baloney! There were a few, very few, special math classes that went against the system, and those delivered results. I went to one; I should know...
The city Lwów, or Lviv as Ukrainians now call it, had a great school and the city changed hands during war.
Also, Polish mathematicians from other universities played an important part in breaking the Enigma. They developed the bomba, the cryptologic machine whose design was later passed to the UK, where it was refined (into the Bombe) and used to break it.
More generally, I suspect it was something about the era and its culture that valued intellect and sciences. These days people like that are often put down as nerds. Leaders and extraverts are praised and set as examples. Celebrities are also a modern invention, I see them as something quite distinct from "stars". The only requirement to be a celebrity is to be popular.
Also, these days people would rather worship CEOs.
I have had the same question in my mind for a long time. My preliminary answer is TV or entertainment, in those days there were much much less distraction.
Elon Musk will be this generation's Edison. Some current nobody in ML or AI research will be the Turing of our time. They're here, we just don't know it yet.
I hope you mean this in the literal sense, in that they are both people who have taken the scientific contributions of others for their own businesses, and somehow get the credit for work they never did.
"If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?"
Hardly surprising that he was one of the influences on the character of Dr. Strangelove. Was he a remarkable genius? Absolutely. Was he right about everything? Definitely not.
Context is crucial when reading this quote now rather than the 1950s. JVN said this before a nuclear war would have been 'world ending'.
Consider that several nations had avoided war as hard as possible prior to WW2, standing by as entire countries were absorbed by a hostile power. In the end it only resulted in much greater destruction. Many people like JVN saw the same theme playing out with the iron curtain, with the destructiveness of the weapons only increasing over time.
The thought process was:
"We need to have a destructive war now to avoid having an earth shattering war later."
There had just been two world spanning wars in their lifetime. They considered a third inevitable. If it was going to happen at some point, better that it occurs before world ending arsenals were constructed.
I am aware of the context - doesn't make it any less terrifying.
Edit: I'm going to ask you the same question I asked another commenter - do you think it would have been better for the US to have attacked the Soviets as Von Neumann and others wanted?
I seem to remember that von Neumann was also responsible for what later came to be known as "Mutually Assured Destruction": the game-theoretic idea that both nations would be prevented from acting preemptively because the results would carry too great a cost.
Paul Boyer's assessment of the pastiche here omits von Neumann, though the environment was target-rich:
While exposing the dangers and dilemmas of deterrence theory, Kubrick also satirized contemporary military figures and strategists, probably including Henry Kissinger, the author of Nuclear Weapons and Foreign Policy (1957); physicist Edward Teller, the “father” of the H-bomb; the ex-Nazi space scientist Wernher Von Braun; and the bombastic, cigar-chomping SAC commander Curtis LeMay, who in 1957 had told a government commission assessing U.S. nuclear policy that, if a Soviet attack ever seemed likely, he planned to “knock the shit out of them before they got off the ground.” Reminded that U.S. policy rejected preemptive war, LeMay had retorted, “No, it’s not national policy, but it’s my policy.” Much of the strategic thinking that Kubrick critiques, and even some of the dialogue in “Dr. Strangelove,” came from the work of Herman Kahn of the RAND Corp., an Air Force-funded California think tank. Kubrick read Kahn’s work carefully, especially his influential On Thermonuclear War (1960). General Turgidson’s upbeat assessment of the outcome of an all-out nuclear exchange directly paraphrases Kahn’s analysis.
That's because we're lucky to live in a world where the cold war stayed cold, and LeMay and von Neumann were coming from half a century of the worst slaughter the world had ever seen.
We can be thankful that the likes of Eisenhower and later Kennedy, who lived through the very same times, recoiled from the prospect of initiating a greater slaughter than WW2.
You can't attribute staying out of nuclear conflict to any head of state, morality, or even logic (there was plenty of luck involved).
For example, after Kennedy resigned himself to striking the sites in Cuba during the missile crisis, it was someone else who told him he was wrong and to reconsider (see the documentary The Fog of War).
To put a fine point on it: even after all the brinksmanship, analysis, and diplomacy that resolved the Cuban missile crisis, it came down to a lone Soviet officer objecting to launching a nuclear-armed torpedo in response to signaling depth charges (interpreted as an attack) from US ships. Pulling the trigger would have started a global thermonuclear war, but it wasn't pulled; one officer decided that against a 2/3 vote in favor.
"Later that same day, what the White House later called "Black Saturday," the US Navy dropped a series of "signaling depth charges" (practice depth charges the size of hand grenades[114]) on a Soviet submarine (B-59) at the blockade line, unaware that it was armed with a nuclear-tipped torpedo with orders that allowed it to be used if the submarine was damaged by depth charges or surface fire.[115] As the submarine was too deep to monitor any radio traffic,[116][117] the captain of the B-59, Valentin Grigorievitch Savitsky, decided that a war might already have started and wanted to launch a nuclear torpedo.[118] The decision to launch these required agreement from all three officers on board, but one of them, Vasily Arkhipov, objected and so the nuclear launch was narrowly averted."
There are plenty of examples where, basically, a roll of the dice saved civilization during the cold war. The destruction of society during the cold war was prevented by luck as much as anything else.
> There are plenty of examples where, basically, a roll of the dice saved civilization during the cold war. The destruction of society during the cold war was prevented by luck as much as anything else.
The systems are still in place, and even today the danger of an "accidental" start to the complete destruction of the current civilization is entirely real. I always recommend a book subtitled "Confessions of a Nuclear War Planner." I won't write the details so that this doesn't appear as an advertisement. It's worth reading the whole book to get the exact idea of how fundamentally flawed the logic of those managing these systems is. Absurdly, they still think they will "win."
As the poet said "We will all go together when we go."
It may have been the rational thing to do. That something worse didn't in fact result from avoiding pre-empting MAD build-up with an early first strike doesn't mean this was the likely outcome. They may have been right, and we just got lucky, so now it looks like they were wrong.
To build on what you are saying, let it be known that LeMay's nickname was "The Demon" and he participated in numerous war crimes involving terror bombing and civilian destruction in Japan and Korea.
Their reasoning was based on the idea that a war with the Soviets was inevitable and that the US should attack before it lost the strategic advantage - given that there was no war are you actually arguing that they were correct?
What you are doing is "resulting": judging a decision made under uncertainty by its outcome. I'm not sure what the probability of a war was, but the fact that war didn't happen doesn't mean it was improbable.
So you think an attack would have been the right thing to do?
Edit: To be fair, in the event of an Able Archer '83 war we might all have wished that such an attack had happened (not me though, I'd be dead). So it's not impossible to construct a timeline where it was the right thing to do. I'm just curious whether, if you had been a decision maker back then, you would actually have chosen to do that.
Basically, China is an ideological extension of the cold war. It's an oligarchic society where the people don't have a vote on their future, versus us, where power is balanced by the vote. The two are incompatible and they will come to a head. If we had pre-empted this the way von Neumann, MacArthur, etc. wanted, we would avoid the conflict that is coming in 10-20 years.
It's easy to construct rational arguments for killing millions in cold blood. Today you could easily construct a rational utilitarian argument for killing billions now to prevent even greater death and suffering later due to climate change.
In fact, it's blanket rejection of the cold-blooded killing of millions that requires (at least for most naturalists/atheists) decidedly non-rational thinking --- i.e., adhering to relevant moral principles while disbelieving in moral realism.
Von Neumann was very smart, but I think he had poor judgement when it comes to politics. Perhaps not. I don't know, tbh! When I learned about his political views many years ago, it actually deeply changed strongly held opinions that I had about America. He was so smart, and I want so badly to follow in his footsteps, that I said to myself, "If Von Neumann believed that only America can save the world from fascism, it is enough for me; I will go on as he began. He must have had his reasons and knew what he was doing. But I will not abandon my principles! Only change the way I go about trying to achieve them."
You recognize von Neumann's superior intellect, and yet, when it comes to his politics you find his views repulsive (your comment below) and believe that he was the one with poor judgment. Hmm! Perhaps you’re passing judgement selectively and approve only of what caters to your emotions. There's more to the world than what meets the eye, or the heart.
Unlike the commenter below, I believe political reasoning is amenable only to a small degree to first principles, for complex, non-linear systems exhibit emergent behaviors not present when investigating the parts. Instead of analytical reasoning, one ought to practice more holistic thinking.
One thing that shouldn't be forgotten is that he judged the world as it looked at that the time he was living in it. He may have made a totally different decision if he was young and in the same stage of life today.
Well said. I do, fwiw. I'm not the kind of person who makes arguments by reference to authority often; I found his political views repulsive and confusing when I first learned about them. It's been a long journey, but I believe there is a strong possibility he was simply wrong about many of the beliefs he held about politics and the course of the future of human history.
I have always wondered how close von Neumann was to the maximum human potential. Does he sit close to the outer edge of possible human potential, or can greater geniuses be created? Is there anyone alive who even comes close to Johnny?
My guess is that each relatively difficult field (fields that require substantial thought... I guess you know it when you see it) has its freaks of nature: genetic anomalies that, for at least a brief moment, seem to transcend the other participants.
In the 20th century, philosophy had Saul Kripke. Among other things, he published one of the seminal papers in the (at that time) nascent area of modal logic when he was 17. I have it on good authority from a professor friend that many - who know Kripke in a professional setting - regard him as having a sort of alien intelligence.
Not to take away from Von Neumann, but there are at least a couple people each century who have these sorts of alien cognitive skills.
Also, while there is probably some truth to them having a sort of alien intelligence, we can't disregard the contextual factors of their success. Von Neumann lived in a unique academic-historical period, one in which a single person could make lasting, foundational contributions to a variety of disciplines.
This is interesting. I hadn't heard of this dispute.
I just read the Marcus paper from 1961 that is essential to Smith's thesis. I also read the back-and-forth rebuttals between Smith and Soames. I read the Marcus paper before reading the Smith and Soames papers.
I don't really see how Soames' primary claim is not likely true. He says:
"Marcus, along with certain other philosophers, do deserve credit for anticipating important aspects of contemporary theories of reference. However this credit in no way diminishes the seminal role of Saul Kripke."
When reading the Marcus paper, you really have to start stretching and expanding her arguments if you want to claim that she did more than anticipate 'important aspects of contemporary theories of reference'.
It should also be noted that Timothy Williamson (Oxford) has been one of the staunchest advocates for the proper appreciation of the work that Marcus produced, and yet he doesn't agree with Smith.
But really, this is all probably secondary to the issues surrounding Kripke's importance. Naming and Necessity, like most paradigm-shifting works, was not a one-trick pony. Kripke expanded on his possible-world semantics, introduced distinctions like metaphysical vs. epistemic necessity, laid waste to any residual belief in the merits of logical positivism, came up with the first successful (at least, most see it as successful) argument for the existence of synthetic a priori truths, etc. Moreover, Kripke came up with at least two fairly watertight arguments against the descriptivist theory he was going against. If Marcus was the first person to introduce this new theory of reference, then the theory was stillborn. Kripke (if we take him as having taken the theory from Marcus) actually explained the ins and outs of the theory, provided associated puzzles, addressed counterarguments, related it to other issues in analytical philosophy, etc.
Lastly, Naming and Necessity was not Kripke's only impressive work. We would have to include his work on modal logic as well as his work on Wittgenstein. There are probably a number of puzzles and counterarguments that were never published that should be included as well. For example, Kripke once attended a conference on personal identity where a philosopher had just presented a new argument in his talk that elicited a standing ovation from the rest of the philosophers in the room (this basically never happens at conferences). Kripke was asked to come up and comment on this new argument. He came up and provided a watertight refutation of it. Everyone in the room was taken aback.
I don't. My philosophy of language prof in undergrad relayed it to me. She said the conference had been held in Israel. I'd start there in your search. My guess, though, is that a transcript doesn't exist. Analytical philosophy as a profession has typically been pretty piss poor for archiving conferences (whether transcripts or programs, etc.).
Ed Witten at Princeton, who unified the competing string theories into M-theory, is called "The Martian" by his affectionate (but disbelieving) grad students.
...
Motivation is key here. It seems that von Neumann became too fond of prestige and spy stuff. So he was less creative. Smarter than Gödel, able to recognise the importance of Gödel's work, but unable to produce anything of similar moment.
One could call it a feeling of responsibility, maybe. The ongoing conflicts were defining the world, and scientists (and he individually) could change the course of history. I think vNM's work is equally impressive (or even more so; it's difficult to compare); maybe his main contribution is starting Game Theory[1], but because his work is spread across so many fields it doesn't look as groundbreaking.
[1]: Indeed, he was so confident in his work that he believed he was "starting and ending" game theory - that there would be little else to work on after his monumental work, OTGEB. He was very wrong, of course, but as with other theories of the kind (Information Theory comes to mind), such an early lead is very significant to the creation of a coherent field.
'Feeling of responsibility' is a nice way of putting it but the fact is that creative geniuses are pretty irresponsible and unconscientious fellows. They abandon a lot of projects and can't force themselves to work on things that don't interest them.
We don't have any record of systems with more intelligence than humans, so the question to us is equivalent to asking what is the maximum potential of intelligence itself.
For this reason general AI may offer us great insight into our mortal coil.
I’ve always been curious about his drinking. Maybe the impression I have of him is misleading and he only drank at weekends or something. Not to compare my pathetic studies to him, but personally I’ve had to almost give up alcohol entirely to study math. I suppose I could also be getting old :)
Smokers' lungs experience something like 1000x more radiation than a visit near Chernobyl. My dad was a student of von Neumann. In the '50s it was fashionable to smoke, drink martinis, and drink lots of coffee, and exercise was for chumps. My dad didn't live even as long as von Neumann, and died of a heart attack.
Yes, only a “small” significant increase to those nearby. And we’re not talking only about fallout but manipulation in the lab.
It’s specifically your brand of risk downplaying that led to all these early deaths. Things are much safer today because educated folks understand the risks and prepare for them.
What makes you think von Neumann got any exposure in labs? He was a mathematician, not any sort of experimentalist.
The idea that his cancer came from fallout is just moralizing nonsense, the silly logical error of thinking that the universe operates in a way that punishes badness.
He was actually at Los Alamos, he was trained as a chemical engineer; it’s hard to imagine he didn’t get invited to see the demon core and other fun stuff, or otherwise breathe in some particles.
So, you think being in the room with the "demon core" was dangerous even when it wasn't being used? The radiation from it in a non-critical state would have been very small. If he had been there during a criticality accident it would have been recorded.
I think you're just waving your hands here because you have no evidence to support the claim.
Everyone (and many family members) working at SSFL 15 years later died early of contamination. They were careless in those days. They probably had a few buckets of the stuff lying around, like at the Grand Canyon gift shop.
I've recently been engrossed by the question of how geniuses and highly intelligent people think. There are bits and pieces of information online. What I'd really like to see, though, is (a) geniuses describing their thought process and (b) a smart person voicing their interior monologue while attacking a problem.
1) Do you know of any information like this? Link?
2) Otherwise: do you think in a way that's different (better) than the average person?
I heard that Tesla could effectively "run" experiments in his mind, not just visualize them. Another pointer is that some "schools of thought" divide the art of thought into ability to visualize and ability to analyze where the latter has to be built upon the former and only when the former is perfected. My personal belief is that being a genius is about the ability to keep in mind many things at a time without losing details.
He also read very fast; the librarians thought he did not like the books because he returned all of them the following week.
Also, what geniuses can usually do is process a great deal of information: read and write many papers. E.g. Terence Tao - his blog is always going strong, plus the Polymath projects, plus regular teaching, plus his own (other) research.
Alexander Grothendieck wrote books full of brand-new ideas about very abstract math. Amazing insights. The ability to work with unfamiliar cutting-edge things while keeping focus, all while communicating them to others.
The most brilliant guy I ever knew was a founder of the field of computational geometry. He has probably co-authored 500 papers by now. He was a paper engine and would hold court with 10 or 15 people, and they would all figure out how to prove the theorems in a paper. His students said that when solving problems he just never seemed to go in the wrong direction, ever, at all! At age 29 - already a full professor - he got the NSF Waterman Award - top researcher, all fields of science. His dad was a farmer in Austria. He was the first computer scientist to ever get the Waterman Award. Herbert Edelsbrunner.
A question to mathematicians: what does your mind look like when you prove a theorem?
There is a comment here, saying that many hard theorems require one to build a complex branch of math and use it to prove a single statement. So I asked myself: what does it really look like to prove a theorem at such level?
I can tell what's going on in a programmer's mind. Software is very much like an imaginary mechanism, and software engineers are its mechanics. For example, this site is a database connected to an HTML page, so a programmer literally imagines a big gray building that means the database, another building that means the HTML page, and a pipe that connects them. The database has a few tables: one for user accounts, another for posts like this, another for comments. So a programmer imagines three big blocks inside that building, connected by pipes transferring data. Next to the database there is a controller device that sends and receives messages in the pipe connecting it to the users. This analogy continues down to tiny things like classes, methods and variables. The entire HN forum looks like a big multi-dimensional city-like structure with numerous pipes connecting pieces together. Experienced programmers not only organize this city well but can also predict and eliminate complexity; e.g. they know that if you put that kind of building over there, others will inevitably connect dozens of pipes to it and the entire city will become a mess, where you see a pipe and have no idea what it connects to or what will happen if you cut it.
I imagine there are programmers who build objects by imagining people asking them questions and thinking what the objects would answer, or all kinds of other things. People for whom Spring's Repository/Service/Controller/View are real things with personalities or flavors. I tend to think in a mix of "If I could just tell the computer what to do in English, what would I say?" (function names) and "Whose responsibility is this? Is this really their job?" (classes).
In my graduate math research I had to find patterns, formalize them and eventually prove more and more little results that added up to bigger results. I went through a lot of iterations, calculations and "experiments". The thing that guided me the most was that I had a feeling that things could be simplified, streamlined and generalized. It was a feeling that didn't go away until I found what I felt like I needed to find.
I find myself in a similar situation with development. There's an inescapable feeling that things could be created or improved. The feeling doesn't go away until I've done the work.
So I think what you're talking about depends more on the person's mind than on what they're doing. Like Conway's law: I think systems usually grow to mimic the developers' way of thinking, whether those are mathematical systems or software systems.
I can say, based on proving new NP-completeness complexity theorems and also optimal algorithms in my CS Theory PhD thesis, it's like a little orgasm light turned on in my brain for about 36h. Not as intense but definitely an extreme glow of elation that I carried around for almost 2d. Nothing like it since ....
> Needless to say, von Neumann‘s main contributions to the atomic bomb would not be as a lieutenant in the reserve of the ordnance department, but rather in the concept and design of the explosive lenses that were needed to compress the plutonium core of the Fat Man weapon that was later dropped on Nagasaki.
What are the equations that govern an atomic bomb? I don't want to say that I am asking for a friend... /s
I assume that by now the relevant equations would be well known anyway?
The equations they are referring to do not concern the nuclear aspects of an atomic bomb. They describe how to set up the geometry of conventional plastic explosives (different kinds of explosives that detonate at different speeds) in just the right configuration so that the propagating shock waves interfere with each other to produce a spherical shock wave that pushes the core of the bomb (the radioactive part) toward a single fixed point at the center. If any of this is off even slightly, the forces will not be in balance and the core material will shoot out of the weak point rather than uniformly compress to bring the nuclear material to the critical point where fission is self-sustaining.
Nice article. Are there any good biographies of John von Neumann that are highly recommended?
EDIT: found it at the end of the article
"
For anyone interested in learning more about the life and work of John von Neumann, I especially recommend his friend Stanislaw Ulam’s 1958 essay John von Neumann 1903–1957 in the Bulletin of the American Mathematical Society 64 (3) pp 1–49 and the book John von Neumann by Norman Macrae (1992).
"
Given the success of “The Imitation Game” and “The Theory of Everything”, I’m disappointed that no one has made a compelling movie about Von Neumann, who I think is a far more interesting character, both mentally and in terms of personality.
I am curious to know. Why does the Jewish community have a disproportionately large number of extremely intelligent people as evidenced by the number of Nobel laureates and other great intellectuals?
I heard it's a preference for non-physical work (lawyers, doctors, bankers, artists, writers...). Also, an old Polish novel, "Lalka" by Bolesław Prus, says Jews have an admiration for intellect and that charades is one of their favorite family pastimes.
Because they have higher than average intelligence, thus having "disproportionately large number of extremely intelligent people". I doubt their distribution is not normal.
I was curious about that too, and after some research the likely explanation is that people with Ashkenazi ancestry have higher intelligence thanks to genetics.
Look up Ashkenazi Jews on Wikipedia; the average IQ of that subgroup is apparently 108, with similar variance. But they also suffer from haemophilia and other genetic inbreeding disorders...
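For what it's worth, the math behind "a small mean shift produces big over-representation in the tail" is easy to check. A quick sketch with Python's stdlib, taking the 108 figure and the conventional 100/15 population parameters at face value (neither is verified here), and using IQ 145 as an arbitrary cutoff for "extremely intelligent":

```python
from statistics import NormalDist  # stdlib, Python 3.8+

population = NormalDist(mu=100, sigma=15)  # conventional IQ scaling
subgroup = NormalDist(mu=108, sigma=15)    # figure quoted above, same variance

threshold = 145  # 3 sigma above the general-population mean

p_base = 1 - population.cdf(threshold)
p_shifted = 1 - subgroup.cdf(threshold)

print(f"population tail: {p_base:.5f}")   # ~0.135%
print(f"subgroup tail:   {p_shifted:.5f}")
print(f"over-representation: {p_shifted / p_base:.1f}x")  # ~5x
```

So even under these toy assumptions, an 8-point mean shift with the same variance yields roughly a fivefold over-representation past 3 sigma, which is the thread's point about tails being very sensitive to the mean.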
Interesting anecdote from the biography of Stanisław Ulam, a close friend of von Neumann. Von Neumann was apparently fascinated by the history of ancient Greece, which he learned by reading Thucydides and Herodotus. Once he was talking with Stan about the Siege of Melos[1] and how violent human nature can be when driven by pride and ambition in pursuit of a certain goal. That was the late '30s, and catastrophes like Lidice[2] and countless others on the soil of what is today Poland and Belarus were soon to happen.
“Johnny built, at the Institute for Advanced Study, an experimental electronic calculator, popularly known as the joniac, which eventually became the pilot model for similar machines all over the country. Some of the basic principles developed in the joniac are used even today in the fastest and most modern calculators. To design the machine, Johnny and his co-workers tried to imitate some of the known operations of the live brain. This is the aspect which led him to study neurology, to seek out men in the fields of neurology and psychiatry, to attend many meetings on these subjects, and, eventually, to give lectures to such groups on the possibilities of copying an extremely simplified model of the living brain for man-made machines.”
Can anyone recommend readings on von Neumann that highlight his non-mathematical achievements? Obviously he was primarily a physicist and mathematician, but for a non-mathematician, the long list of academic publications is hard to interpret and appreciate. For example, more in the vein of these:
- Reportedly, von Neumann possessed an eidetic memory, and so was able to recall complete novels and pages of the phone directory on command. This enabled him to accumulate an almost encyclopedic knowledge of whatever he read, such as the history of the Peloponnesian Wars, the Trial of Joan of Arc and Byzantine history (Leonard, 2010). A Princeton professor of the latter topic once stated that by the time he was in his thirties, Johnny had greater expertise in Byzantine history than he did (Blair, 1957).
- ...conversing in Ancient Greek at age six...
- On his deathbed, he reportedly entertained his brother by reciting the first few lines of each page from Goethe’s Faust, word-for-word, by heart (Blair, 1957).
I think some of it has to do with his famously difficult personality. Colleagues found him unbearably arrogant, and he seemed to have a chip on his shoulder, probably arising from his highly contentious relationship with his father. Von Neumann on the other hand was reportedly extremely charming and funny, and great fun to have at parties.
There's a lazy, sensationalist style to that article that I find off-putting -- the relevant Wikipedia page is more sober and actually (or because of that) makes you more in awe of the man in question.
"I have known a great many intelligent people in my life. I knew Planck, von Laue and Heisenberg. Paul Dirac was my brother in law; Leo Szilard and Edward Teller have been among my closest friends; and Albert Einstein was a good friend, too. But none of them had a mind as quick and acute as Jansci [John] von Neumann. I have often remarked this in the presence of those men and no one ever disputed me.
... But Einstein's understanding was deeper even than von Neumann's. His mind was both more penetrating and more original than von Neumann's. And that is a very remarkable statement. Einstein took an extraordinary pleasure in invention. Two of his greatest inventions are the Special and General Theories of Relativity; and for all of Jansci's brilliance, he never produced anything as original."
A real comparison is of course impossible, but of that coterie, Kurt Gödel is a strong contender. As the original post mentions, he single-handedly destroyed Hilbert's formalist program (which von Neumann was also working on), and von Neumann is quoted as saying he was "in a class by himself". There is the famous letter from Gödel to von Neumann which anticipates the P vs NP problem by decades.
In a different era, I think Blaise Pascal is also a contender. At age 19 he invented a working mechanical calculator with a functional carry mechanism, which somehow his contemporaries failed to achieve for the next sixty years.
In terms of sheer power of computation, I would say clearly not.
Take, for instance, his race against David Hilbert to get the equations of general relativity in correct form. It's not perfectly clear that Einstein was able to keep up with him, in finishing his own theory: https://medium.com/cantors-paradise/einstein-and-hilberts-ra...
"Most intelligent" is an impossibly hard claim to justify. Most influential physicist, though, Einstein certainly was. Von Neumann may well have been smarter, but he didn't have the impact that Einstein had. These are two different things.
Intelligence is a word, a messy abstract concept. If you treat it as an existent quality like height or weight, you end up with absurd questions that have no answers, like "Who was smarter, Picasso or Einstein?"
and it's not because Einstein was a brilliant physicist.
Intelligence refers to how well you can reason abstractly, not to how successful, competent, eloquent, or talented you are. For those we have those words.
And Einstein would still have been more intelligent even if he had never discovered anything.
Intelligence has been defined in many ways: the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem solving.
Wish we had much of a clue -- we're lucky that as much of his own work survived as did. There seems to be a strong tendency to overestimate surviving authors and underestimate their civilization. (Saying this although he's a hero of mine.)
I guess Einstein was no von Neumann, but gosh what a beautiful mind -- even with so much I didn't understand in "Subtle is the Lord...", it's worth a read.
I guess it's possible that Archimedes invented the integral calculus a hundred generations before Newton and Leibniz because Syracuse was teeming with calculus-like ideas that didn't get written down, and things like the Antikythera Mechanism suggest that there were a lot of important intellectual achievements refined through broad communities of practice that didn't survive. But it seems just as likely to me that Archimedes was just that good.
(A tidbit about that Definition 5 there: I read in Russo that Galileo could not see the point of it. To him a ratio was a concrete thing, a number you got by dividing two other numbers. His civilization, says Russo, had lost the concept of freely invented axiomatic definitions and didn't really recover it till the 19th C.)
I like to think of Archimedes as the first mathematical physicist. Of course Euclidean geometry itself is a physical theory, so there's one ground to disagree.
IQ is positively correlated with health, educational achievement, job performance, income and occupational status and negatively correlated with out of wedlock childbirth and being convicted of a crime.
Intelligence: Knowns and Unknowns, American Psychological Association
People don't like being shown things that dispute their world view, especially on things which carry as much weight as intelligence, which is why I assume you've been downvoted. IQ may not be an exact metric of intelligence, but the correlations of IQ with success in the modern Western world are too obvious to ignore, and indeed could be detrimental to ignore. Note: correlation does not equal causation, but it suggests deeper research is required.
Such a claim would be relevant only if every kid in an age class (or at least several thousand, across various countries) were asked to take an IQ test, assuming that IQ stays constant throughout life. And there is still the definition of "success" to discuss.
I'm pretty sure there is some selection bias. I've never known anyone around me who has actually taken a real IQ test, and I guess it's the same for most people in the Western world.
On top of that, we must not forget that we're talking here about outliers, people whose intelligence (whatever your definition of it is) is - to quote your own words - too obvious to ignore. People at the far edge of the bell curve of intelligence. Nobody will deny that Hawking was brilliant; he was not in good health, though.
Of course, saying that being smart helps you achieve something in life is obvious. As does being well connected. As does being healthy.
> Such a claim would be relevant only if every kid in an age class (or at least several thousand, across various countries) were asked to take an IQ test, assuming that IQ stays constant throughout life.
The earliest comprehensive national survey I’m aware of is the Scottish 1932 one[1]. Most countries with conscription/the draft administer an IQ test like the ASVAB in the US. So in the Nordic countries you have going on a century of data covering basically the entire male population at 18.
IQ becomes more stable across life. The younger a child is, the more likely it is that any given test is a bad estimate of their adult IQ; adult IQ is basically stable once you take into account age-related cognitive decline[2].
[1] Population sex differences in IQ at age 11: The Scottish mental survey 1932
There is uncertainty whether the sexes differ with respect to their mean levels and variabilities in mental ability test scores. Here we describe the cognitive ability distribution in 80,000+ children—almost everyone born in Scotland in 1921—tested at age 11 in 1932. There were no significant mean differences in cognitive test scores between boys and girls, but there was a highly significant difference in their standard deviations (P<.001). Boys were over-represented at the low and high extremes of cognitive ability.
[2] Intellectual Development from Early Childhood to Early Adulthood: The Impact of Early IQ Differences on Stability and Change over Time
Intellectual ability of about 200 individuals was first assessed between the ages of four to twelve years, and subsequently at the ages of 17 and 23. Stability of general intelligence was found to be moderately high for the entire study period. Stability was higher for shorter intervals between measurement points and increased with age. Subgroup analyses for initially high-, average-, and low-IQ children revealed that IQ stability over time was higher for the low-IQ than for the high-IQ children. Overall, participants with initially higher IQ scores maintained their advantage throughout the study until the period of early adulthood, and were more likely to attend higher educational tracks.
Taleb is not an expert in the field. I'm not qualified to examine the data from first principles, but my understanding is that among experts in the relevant academic fields the standard orthodoxy is that 1) IQ tests measure something meaningful and 2) whatever it is they measure is highly predictive of a wide variety of benefits in life.
"Correlation" being the key here. All of those things are also correlated with having a good upbringing and environment, as well as nutrition in childhood, as is IQ, so IQ is basically an irrelevant reflector of environment/inheritance factors.
I think moderators should mark articles behind paywalls as such; I already don't click on wapo or nytimes links, but you can never tell with Medium. I'm retired on a fixed income and can't justify subscribing to these places in my budget. It makes me sad when I can't read an article that might otherwise fascinate me. One solution would be to never click on Medium links, but then I'd miss the ones that are free.
Medium really botched the paywall approach, and I couldn't possibly agree with you more. It is so inconsistently applied, and quite honestly I can't justify spending 5 dollars a month on a "publisher" whose content is so variable. Some Medium articles are really good, others are baseless clickbait, and it's hard to tell until you've read the article and already sunk the time and the money.
I'm thinking of writing an extension that would go grab the first page of each link and tell me if they want me to pay. The extension would turn the link red if so. How consistently can I get this right, though? I can't just go by domain name, because sometimes I have free articles left, on the New Yorker for example. I agree with you on Medium. I was angrier until I found out it is up to the individual authors to turn on the paywall for their articles so they can get paid. 60 dollars a year for hit-or-miss Medium content is a bit rich. They must think they are in the same league as the NYT or WaPo.
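The core of that extension is simple to prototype outside the browser. Here is a rough sketch in Python: fetch the page and look for paywall phrases in the HTML. The marker strings are my guesses from pages I've seen, not a verified list, and a real extension would need per-site tuning (and handling for metered sites where the count matters):

```python
# Rough paywall sniffer. The PAYWALL_MARKERS list is guesswork, not a
# definitive catalog of what each publisher actually serves.
import urllib.request

PAYWALL_MARKERS = [
    "subscribe to continue",
    "subscription required",
    "create a free account",
    "you've reached your",   # e.g. "...free article limit"
]

def looks_paywalled(html: str) -> bool:
    """Return True if the page text contains a known paywall hint."""
    text = html.lower()
    return any(marker in text for marker in PAYWALL_MARKERS)

def check_url(url: str, timeout: float = 10.0) -> bool:
    """Fetch a URL and run the marker check on its body."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return looks_paywalled(resp.read().decode("utf-8", errors="replace"))
```

This obviously can't catch paywalls that are injected by JavaScript after load; for those you'd need the extension to inspect the rendered DOM instead of the raw HTML.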
I have a feeling that modern society doesn't motivate scientists and mathematicians enough. It's easier and better paid to do a marketing, big-data, or game-industry job. Tech startups are raising millions to develop remotely controlled lightbulbs or multiplayer AR chess games, and schools are lowering requirements for students to pass because politicians want to show an increasing number of graduates.
I think a big part of that is that there are too many people inherently interested in doing science. In fact, there are already a lot more grads coming out with science degrees than there are jobs, and math is even worse. Although these positions pay less, the competition is probably fiercer than for the positions you mentioned (although I think game dev has a similar issue).
This is a problem, but it's a good one to have: we have a lot of people out there willing to make science their life's work even without these external motivators.
Von Neumann is something of a hero of mine, in part because of how he demonstrates (by example) that you can achieve surpassing insight by patiently applying logic and analysis to whatever problem is at hand.
But also because he shows that you don't have to be narrowly focused on your area of competence; you can pursue other interests and even excel in them, even if they aren't the things you're known for.
He really liked history, particularly European history.
There's some examples of this in his letters[0], but my favorite von Neumann story on this topic is how he managed to deduce the answer to a literary magazine's poetry trivia question when queried by his brother.
They were both teenagers at the time, and the magazine was in English, as opposed to their native Martian, yet John got the solution at once.
Paraphrasing from here[1], although his brother's limited run biography/reminiscences has the story too.
The prize contest had the lines
They know this well my baron and my men
Gascony, England, Normandy, Poitou
That I had never follower so low
Whom I would leave in prison to my gain
I say this not as a reproach to them
But prisoner I am
John replied immediately: "Richard Coeur de Lion".
"Did you know the poem?"
"No."
"Then how did you identify the poet?"
"Very simple" he said. "Gascony, England, Normandy, and Poitou were
in one feudal hand only once during the early Plantagenets,
and from there it was quite easy to associate with Richard's crusades
and European captivity. But of course this is a translation, since
quite obviously the early Plantagenets spoke Norman-Medieval French."
His brother then goes on to state:
I found out much later that the translation was that of Henry Adams,
and the Prison Song is only one of Richard's most perfect poems,
usually referred to as gems of English literature!
---
0. One to his daughter springs to mind, where he expresses fatherly concern about her getting married during her undergrad-- he's worried that she's too young and this will derail her career. After that's addressed, he then moves on to talking about one of her term papers, and proceeds to suggest an insightful take on-- I believe, I haven't read the letters in a decade-- some medieval French bishop. The juxtaposition of concerned father plus European historian in a man more known for axioms and automata was jarring.
It must have been so much harder to be brilliant back then. Now we just have Google for everything. Imagine what these guys could have accomplished with the computing power and tools of today.
von Neumann was very anti-Soviet in his political views, yet he worked with (and was co-inventor on a nuclear bomb patent with) fellow nuclear weapons scientist Klaus Fuchs. Fuchs was a spy who passed von Neumann's classified work on to the Soviets. Despite his evident genius in so many areas, von Neumann apparently failed to discern the motivations of his colleague.
[0] https://www.youtube.com/watch?v=vLbllFHBQM4
[1] https://www.amazon.com/Martians-Daughter-Memoir-Marina-Whitm...