See  for a demonstration.
I watched a documentary from the '80s a long time ago. A mathematician (I can't remember his name) who worked with von Neumann in Los Alamos was interviewed. He described von Neumann's last weeks in the hospital - the cancer had already metastasized into his brain. The mathematician said something along these lines (I am citing from memory): "von Neumann was constantly visited by colleagues, who wanted to discuss their latest work with him. He tried to keep up, struggling, like in old times. But he couldn't. Try to imagine having one of the greatest minds maybe in the history of mankind. And then try to imagine losing this gift. It was terrible. I have never seen a man experience greater suffering."
Marina von Neumann (his daughter) later wrote this about his final weeks:
"After only a few minutes, my father made what seemed to be a very peculiar and frightening request from a man who was widely regarded as one of the greatest - if not the greatest - mathematician of the 20th century. He wanted me to give him two numbers, like 7 and 6 or 10 and 3, and ask him to tell me their sum. For as long as I can remember, I had always known that my father's major source of self-regard, what he felt to be the very essence of his being, was his incredible mental capacity. In this late stage of his illness, he must have been aware that this capacity was deteriorating rapidly, and the panic that caused was worse than any physical pain. In demanding that I test him on these elementary sums, he was seeking reassurance that at least a small fragment of this intellectual powers remained." 
My Dad says that von Neumann liked to play his music loud and work late at night with no interruptions. I remember Edward Teller coming to a costume party at our house and my Mom and one of her friends had a white bird costume for him to wear. He refused to wear the bird's head, just wore the white wings. The next day in Herb Caen's newspaper column he incorrectly talked about Edward Teller wearing an angel of peace costume.
It feels terrible. When I first experienced it, I was terrified that it would be permanent because that would keep me from doing my job and keeping up with my interests and hobbies. Now I only get this brain-fog every so often. I think it helps me communicate with people better and has helped me learn patience. I finally understand how some people might be genuinely, earnestly trying to understand what I'm trying to say or teach, but can't understand it because I'm not communicating it at the right level of analysis.
If you're intellectually gifted (many programmers are), you shouldn't take that for granted. You got lucky, and if you weren't lucky enough to be intelligent, it might have been impossible to do the same kind of work you do today. Please appreciate that.
Still it forces you to think at different levels of mental acumen and appreciate the differences people have in mental quickness. Though it’s also a challenge when people with constant “IQ” assume you don’t know a given topic if you’re in an off moment or day. Luckily the more experienced people seem to pick up on that. In the end communicating across different intellectual levels makes for a humbling challenge.
Could you explain it in simpler terms?
And I'm not even "old" yet...I guess it will pay off in the long run.
Let's not jerk each other off too hard, eh? Just like people in any other trade or profession, most programmers are just average people like everybody else. They just happened to luck out and be good at a talent that pays well.
Odds are incredibly high you aren't any smarter than somebody who paints houses for a living, fills your prescription, or plays football for the NFL. We are all just people trying to make a living -- don't ever forget that.
Programmers may not be wiser, kinder, or better than anyone else, but on average they are a bit smarter.
Too many engineers are way too convinced that the fact they know some kind of engineering makes them automatically smart at everything outside of their narrow domain of expertise (e.g. programming).
Programming is correlated with higher IQ.
IQ is correlated with spatial reasoning and the kinds of multi-step puzzle solving that is often needed in programming.
I don't know why you find this so surprising or offensive. It's simply that the IQ tests the kinds of things that often make good programmers.
IQ isn't the only thing correlated with programming ability, and people with high IQ are often extremely dumb in many areas.
(I don't have a particularly high IQ)
A few days ago I went to a grocery store to buy sparkling water. I got 16 small bottles and arranged them as a 4 by 4 grid in front of the cashier, so she wouldn't need to spend time counting them and delaying the line. It took her over 10 seconds to count the bottles: apparently she didn't recognize the pattern and counted them in groups of 3-4, covering the counted ones with her hands to simplify the process. Programmers don't even need to multiply 4x4; they just see the answer. This scene hints that the cashier's analytical skills are next to none, and that alone explains why she is a cashier.
Another example. There is a curious simple test to check your working memory capacity. Imagine a 3x3 grid and write the words gas/oil/dry into it, one word per row. Now read all the words that you see on the grid, including the vertical ones. Not all people have visual imaginations: some operate with graph-like structures and represent the grid as a set of logical statements: row 1 is gas, row 2 is oil, row 3 is dry. This is fine as long as they do it efficiently. Most people in this task will resort to the snail analytical approach and will be thinking like: the second letter of row 1 is A; the cell below it belongs to word 2, which is oil, so its second letter is I; so our current sequence is AI. Obviously it will take them ages to enumerate all the words this way. High-level programmers can keep the entire grid in memory, either as an image or in symbolic form, and thus can enumerate the words quickly. Why does this example matter? Programmers have to keep many objects, and the connections between them, in memory.
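As a minimal sketch (my own Python illustration, using exactly the words from the example above), the grid and its vertical words look like this:

    # The three words written into the 3x3 grid, one per row.
    rows = ["gas", "oil", "dry"]

    # Reading the grid vertically is just a transpose of the rows.
    columns = ["".join(letters) for letters in zip(*rows)]

    print(rows)     # ['gas', 'oil', 'dry']
    print(columns)  # ['god', 'air', 'sly']

Keeping the whole grid in mind at once is what lets you read off the column words directly, instead of reconstructing them letter by letter.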
If everybody had these raw cognitive skills, programmers would get the standard minimum wage. Same for accountants, lawyers, bankers.
Her numerical analytics skills perhaps, but she might be great at something else. Maybe she is a poet, or a musician, or great at handling 5 kids, or something else that you are terrible at.
Just something that doesn't pay well, so she has to be a cashier.
This is kind of a form of Moravec's paradox: we assume that what is easy for us must be easy for others, and that what is hard for us must also be hard for them.
I can't find the reference right now, but this was a famous problem in early testing of monkeys' intelligence. They showed the monkeys pictures of humans and they couldn't distinguish them, so it was assumed they were fairly dumb. But then eventually someone figured out they were much better at distinguishing pictures of other monkeys.
Of course it's still possible that the cashier is not good at anything, but in my experience that's very rare. Most people have some skills.
Intelligence is a very narrow and specific skill of discerning the real. It allows us to predict things. Coincidentally, it also allows us to make money and thus is well paid. It's ok if others disagree with me.
My two examples above are meant to hint that intelligence consists of two distinct skills: the ability to hold a stable, detailed image in your mind (the 3x3 grid example) and the ability to analyze that image (the bottle-counting example). The latter builds on the former, because if the image is blurry in your mind, there is nothing to analyze (you can't see the words in the 3x3 grid if it keeps floating away). We could go further and divide the two skills into, say, 10 distinct levels of mastery and define their characteristics, but the point remains the same: it's a steep ladder that has to be climbed if one wants to acquire this skill.
Poetry, music and handling kids are different skills and are of no help in climbing the ladder of intelligence. Now, I agree that compassion and other skills can be as useful as intelligence, but they are different skills and have to be mastered separately.
Just my 2c.
I can't do this for shit and I'm a genius.
I guess I'm living proof that not all programmers are smart.
On the other hand even when I think of a three letter word I don't see the letters, I just kind of know what they are.
Your second paragraph is wholly unrelated, since the OC isn't saying that every programmer is smarter than everyone else. And your first is trite humanist garbage.
“Just happened to luck out and be good at a talent that pays well”
And, coincidentally, that “talent” (abstract thinking, clarity and extended focus of thought) “””happens””” to be highly correlated with higher IQ.
Still rats in the race, just rats with a slightly higher score on a particular dimension.
John von Neumann. Dude is a genius. Einstein. Also. you? me? sorry. Doesn't matter if you can code. People can paint cars way better than you. They can make a perfect french fry. They can inspect a clogged sewer line way better than you. They know just the best ways to start an IV. Who is more intelligent?
Just because you can do $PROFESSION doesn't make you above average in intelligence. Period. Full stop. To think any other way (and for somebody to post a link to some BS article supporting their assertion) is completely absolutely 100% arrogant BS. Get off your high horse people.
> Evidence has been mounting for decades that for most non-sport domains and for most people,“natural talent” is not an absolute requirement for reaching high levels of expertise.
> Other than the sports that depend on specific physical prerequisites, few domains have hard genetic limits for expertise. There is a way in which natural ability might contribute to high expertise in non-athletic domains, but it’s not domain-specific “natural” gifts... it’s a natural ability for focused practice.
> Even at the very top levels in most non-sport domains, there’s little evidence that “natural talent” is a hard requirement. But where it might exist, it’s most likely to show up at the beginning of the curve and the very very very top.
(Ask yourself how well the writer could explain monty hall, or regression to the mean, or what's wrong with p-values, to get an idea whether they can possibly have a handle on the material.)
This is probably true for programming too.
But just because something isn't necessary, that doesn't imply there is a correlation.
It's the causation that's interesting, though: do gymnasts become coordinated through practice, or do the most coordinated people go into gymnastics? I think the OP is arguing the former, and I think I agree. In that case, programming is not unique re: intelligence.
That was never claimed; the claim was a different one:
FACT: The groups of people that work in certain professions definitely have _average_ differences in IQ if we can accept that IQ exists.
That is NOT to say that any individual mathematician can say that he or she is more intelligent than the rest of the population.
Understand the difference.
Well, this is true: intelligence does _not_ determine someone's worth.
Ultimately how intelligent you are is irrelevant providing that you _can_ provide value.
>Still rats in the race, just rats with a slightly higher score on a particular dimension.
Yes, but surrounded by other rats with a slightly higher score on a particular dimension.
But anyone can still learn how to do it: abstract reasoning is not reserved for the "intelligent", it's open to anyone who is of at least average intelligence.
Interestingly this implies that the average programmer does have higher than average intelligence (as measured by IQ scores).
The reasoning goes like this (a quick numerical sketch follows the list):
- At least average intelligence is required for abstract reasoning (and programming)
- This means that the distribution of intelligence for programmers has a lower bound around the average value
- This means that the mean and median must both be above average.
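A quick numerical sketch (my own Python illustration; IQ is modeled with the conventional mean-100, SD-15 normal, and the cutoff at the mean is just an assumption standing in for "at least average intelligence required"):

    import numpy as np

    rng = np.random.default_rng(0)

    # Population IQ modeled as the conventional normal(100, 15).
    population = rng.normal(loc=100, scale=15, size=1_000_000)

    # Crude stand-in for "at least average intelligence is required":
    # keep only the people at or above the population mean.
    programmers = population[population >= 100]

    print(round(programmers.mean(), 1))      # ~112.0, above the population mean of 100
    print(round(np.median(programmers), 1))  # ~110.1, above the population median of 100

Truncating the distribution from below necessarily pushes both the mean and the median of the remaining group above the population average.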
That person you quoted was Edward Teller and the interview can be found at 54:58 .
I read somewhere that the young boy asking von Neumann a question at 6:13  is actually Bill Clinton. Can anyone verify this?
The top comment poster on the other video says it's his father, Bill Walters.
However, Clinton was born in 1946, and this video is from 1955, so he would have been 9. The boy in the video looks older than 9 to me.
There may be mental pathways that allow for love without attachment. I personally don’t see them yet; perhaps I need to learn more about Zen or Daoism or something. In my unenlightened state, the only way I see to avoid the pain of loss is avoiding love in the first place, which doesn’t look like a good idea.
That young man who wants to be a lawyer was actually a lot better prepared for this interview than the guy with the microphone!
Nice hypothesis, but not exactly backed up by history. So many different societies have made so many diverse contributions to innovation, I'd be pretty amazed if you could draw a cohesive line through all (or most) of them.
Also, separately, it's pretty difficult to separate "bloodlines", by which I assume you mean genetically inherited traits, from socially inherited traits.
A great physicist is probably more likely than average to have offspring that are also great physicists. But is that because of their "blood" (i.e. DNA) or because the children grew up in a household exposed to physics at a much higher degree than average. The children's "blood" is an inherited DNA trait, but their upbringing is an inherited social trait.
The question boils down to the age-old nature vs. nurture argument. All signs seem to point to nurture being the far more powerful influence.
Genetic natural selection takes thousands of years, or at an absolute minimum multiple generations. Social selection occurs far more rapidly, often within a single generation. A person born in the 1950s that is genetically predisposed to manual labor may do well for the first few decades of their life, but as society changes and starts valuing white collar work more, they will do far worse. Their genetics didn't change, but society did.
Those that favor nature over nurture vastly under-estimate the time-scales which it takes natural selection to occur as compared to social selection.
But what I am saying is this: the highest levels of genius, I would argue, are categorically different from just highly intelligent people. I think there's something different about the way their brains are structured. I've interacted with some of the top minds in a few fields, and I never come away with the feeling that they are merely farther along some kind of "intelligence spectrum". It always feels as if their thinking is different, i.e. its source and methods come from a different type of brain. I think there are a few mutations floating around in a few different pools of the population.
The ‘more to the story’ is possibly just affluence and family stability, which are also culturally embedded.
aka regression to the mean
At a guess I would think these are the more important factors.
Being born into an affluent family gives access to higher likelihood of more varying influences and stimuli from an early age. Children are by their nature curious creatures, so having the possibility to satisfy their curiosity in more avenues should reflect on their later ability to absorb new information in these fields (because they already have an established baseline knowledge).
Family stability probably helps to support curiosity and emotional safety. When failures are treated as positive experiences ("what did we learn from this?"), as opposed to wasted effort, you are more likely to allow yourself to seek more such experiences.
Of course there are outliers. But over generations, I would expect more innovations and brilliant minds to emerge from families who can provide and support their offspring with the environment to flourish in their fields of interest.
I don't like to conjecture about cleverness and intelligence, and especially not genetics. But if there is one thing these Jewish leaf nodes have in common, it is a really good education.
Do note that he proposed a description but it wasn't his idea. https://en.wikipedia.org/wiki/First_Draft_of_a_Report_on_the...
> some on the EDVAC design team contended that the stored-program concept had evolved out of meetings at the University of Pennsylvania's Moore School of Electrical Engineering predating von Neumann's activity as a consultant there, and that much of the work represented in the First Draft was no more than a translation of the discussed concepts into the language of formal logic in which von Neumann was fluent.
"Differences over individual contributions and patents divided the group at the Moore School. In keeping with the spirit of academic enquiry, von Neumann was determined that advances be kept in the public domain. ECP progress reports were widely disseminated. As a consequence, the project had widespread influence. Copies of the IAS machine appeared nationally"
Based on what I could see from the patent, it looks like an exploration of using such elements in place of (then) vacuum tubes; it also seems like it might also use non-linear properties; somewhat of an "analog computer" in a way.
There also seem to be hints at these elements being used in an "artificial neuron" manner (hardware-based artificial neural networks and neurons were a topic of interest at the time).
Strangely (I may have missed it - I only skimmed the patent) the use of the transistor seems to be missing (again, not sure)...but if this is true, it may be because again - it seems to be exploring non-linear storage and response as memory and computational elements.
The ideas of using - say - capacitive and inductive elements for memory elements (at a minimum) was known back then; it was also known how to use inductive-only elements for amplification and switching purposes (google "magnetic amplifiers" - the tech goes back a long way). But this patent seems to be using both in a different manner for a combination computation and memory (again, similar to an artificial neuron).
It's a very interesting patent; in a similar scope as to Turing's writings on neural network systems. Thank you for bringing it to our attention.
Whitehead's quote from that article ("Everything of importance has been said before by somebody who did not discover it") is elegantly stating that it's unoriginal turtles all the way down.
I have to wonder what else gets washed away in all the myth-making about these guys.
- Reportedly, von Neumann possessed an eidetic memory, and so was able to recall complete novels and pages of the phone directory on command. This enabled him to accumulate an almost encyclopedic knowledge of whatever he read, such as the history of the Peloponnesian Wars, the Trial of Joan of Arc and Byzantine history (Leonard, 2010). A Princeton professor of the latter topic once stated that by the time he was in his thirties, Johnny had greater expertise in Byzantine history than he did (Blair, 1957).
- ...conversing in Ancient Greek at age six...
- On his deathbed, he reportedly entertained his brother by reciting the first few lines of each page from Goethe’s Faust, word-for-word, by heart (Blair, 1957).
If he was able to recall ~entire pages of the phone directory on command, I wonder if he could also recall (near) verbatim text of all the novels he had ever read, to what degree he could do this, or to what degree he could at least comprehensively recall key points, facts, timelines.
I would think he would have spent some time speculating on how the brain stores memories, I wonder if any of his theories were ever captured in some form.
Given that he could be occasionally absent minded, I suspect that it had to be something that piqued his interest, but his sense of what was interesting was extremely broad.
He did in fact speculate on the workings of the brain in The Computer and the Brain, which is based on a lecture series he had planned out but did not deliver.
It was more in the context of automata theory, but as someone with an interest in AI, automata, and neuroscience, it was frankly rather dank.
A lot of the pioneering work was, and is enjoyable in part because it's original and speculative, so you don't have to master the literature to make sense of it, you can just pick a paper and go. I'd recommend reading Pitts and McCullogh, plus also Lettvin, but others might have some equally lit recommendations.
0. In the contemporary sense, c.f. "cool", "dope", or "excellent"; not dank like a root cellar.
1. vide supra
Of course, learning to play video games at a high level is trivially easy, as everyone in the field now knows.
The next challenge is, naturally, to make money doing this.
After the traditional thirty seconds of research before undertaking a major project, I determined that the only way to make money from video games is to become a popular streamer.
So now I am training an agent to generate video of it playing and reacting to an imaginary game and equally fictitious Twitch viewers, with a dataset drawn from the top Fortnite streamers.
The reward function is a blend of subscribers, donations, and (logarithmically scaled) misogyny in the chat.
Thus far, I've only managed to create some sort of window into hell, where the "game" consists of unceasing violence, murder after murder after murder as towers of mismatched material swell and fall in ever transforming locations on the isle while the chat endlessly subscribes, spams, and emotes in cackling glee and the superimposed webcam video features a... thing with too many eyes and hands screaming incoherently.
At first I thought it was a problem with my dataset, so I started watching some of the streams myself.
This has not yielded insight into the whole "nightmare vision" output of my model, but it has expanded my vocabulary on the twin subjects of combustibles and comestibles, which I feel is a reasonable trade-off for the sanity battering associated with this whole endeavour.
(mentioned briefly in the article as "Heims's book")
and maybe this one but I haven't read it.
Although the terms eidetic memory and photographic memory are popularly used interchangeably, they are also distinguished, with eidetic memory referring to the ability to view memories like photographs for a few minutes, and photographic memory referring to the ability to recall pages of text or numbers, or similar, in great detail. When the concepts are distinguished, eidetic memory is reported to occur in a small number of children and as something generally not found in adults, while true photographic memory has never been demonstrated to exist.
"He could speed through a book in about an hour and remember almost everything he had read, memorizing vast amounts of information in subjects ranging from history and literature, geography and numbers to sports, music and dates. Peek read by scanning the left page with his left eye, then the right page with his right eye. According to an article in The Times newspaper, he could accurately recall the contents of at least 12,000 books."
If someone had photographic memory then they could do this superimposition in their memory, but it doesn’t seem like anybody can.
I could imagine a "spot the difference" type of test would be a good one though. The fields of random dots are identical save for one dot, and you have to say where it is. Something like that.
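As a rough sketch of that idea (my own Python illustration; the sizes and names are arbitrary), the stimulus is just two random dot fields that differ in exactly one position:

    import numpy as np

    rng = np.random.default_rng(42)

    # Two 20x20 binary dot fields, identical except for a single flipped dot.
    field_a = rng.integers(0, 2, size=(20, 20))
    field_b = field_a.copy()
    row, col = rng.integers(0, 20, size=2)
    field_b[row, col] ^= 1  # flip exactly one dot

    # The test: the subject has to say where the two fields differ.
    print(np.argwhere(field_a != field_b))  # exactly one (row, col) location

Someone who could genuinely superimpose the two images in memory would spot the differing dot immediately; the rest of us have to scan.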
Not really, unfortunately. I think it would be fun to build a 3D printer that can replicate itself, but I have not tried building any prototypes because, besides being too expensive, I also haven't taken my research into it far enough. I am a very ambitious person and have big plans for the future! What I have done on this subject, besides reading about it, is very theoretical and abstract; it's more of an aspiration right now than an active research project, to be completely honest.
While there were a few prototypes in the 50s, the transistor killed it. A fully functional version of this computational architecture has never been built, to my knowledge:
This type of computer is also called a "parametron" (https://en.wikipedia.org/wiki/Parametron). A Japanese researcher named Eiichi Goto independently invented it around the same time as von Neumann, and developed it much further than von Neumann did, so the idea is more often associated with Goto than with von Neumann.
The basic idea is that if a nonlinear harmonic oscillator is driven at twice its resonant frequency, it will oscillate stably in either of two phases. The two phases represent "0" and "1". If the driving signal is switched off and on again, the oscillator will arbitrarily "pick" a phase to stabilize on. If it's exposed to an input signal from another oscillator as it's turning on, it will always pick the same phase as the input signal. This makes it possible to copy a "0" or "1" from one oscillator to another. If the oscillator is exposed to input signals from several other oscillators, it will pick by "majority vote". Finally, a NOT-gate can be built by inverting the signal polarity. This set of primitives is sufficient to build arbitrary logic gates and flip-flops.
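As a toy illustration of that last point (my own sketch in Python; it abstracts each oscillator's phase to +1/-1 rather than simulating the physics), majority coupling plus a polarity-inverting wire is enough to get ordinary Boolean logic:

    def majority(*phases):
        # Phase (+1 or -1) a freshly re-pumped parametron locks onto, given its input phases.
        return 1 if sum(phases) > 0 else -1

    def NOT(a):        # a coupling that inverts signal polarity
        return -a

    def AND(a, b):     # majority of a, b and a constant "-1" (logical 0) reference
        return majority(a, b, -1)

    def OR(a, b):      # majority of a, b and a constant "+1" (logical 1) reference
        return majority(a, b, +1)

    # Encode logical 0/1 as phases -1/+1 and check the truth tables.
    for a in (-1, +1):
        for b in (-1, +1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))

With an odd number of inputs the vote is never tied, so majority coupling, inversion and constant references suffice for arbitrary gates, which is the "majority vote" behaviour described above.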
Goto's paper has an excellent explanation and more detail:
Goto, E. (1959). The Parametron, a Digital Computing Element Which Utilizes Parametric Oscillation. Proceedings of the IRE, 47(8), 1304–1316. doi:10.1109/jrproc.1959.287195
(PDF available at https://sci-hub.tw/10.1109/JRPROC.1959.287195)
There’s also a quantum variant:
Ha! No, but the computer would resemble brain oscillations and harmonic integration much more than linear clock cycles. We still don't know much about how the brain uses rhythms and harmonies to compute, but it does (brainwave bands are octaves, i.e. harmonic doublings: 2.5, 5, 10, 20, 40 Hz).
One particular error in the article stood out to me: the Trinity test site is in White Sands, New Mexico, not Nevada. This was immediately noticeable because I've been to the Trinity site.
Von Neumann probes pop up in sci-fi. One of my favorite uses of them is in the Bobiverse books: https://www.goodreads.com/book/show/32109569-we-are-legion?f...
Plus we haven’t had enough time to make myths about the recent times. Of course the 20th century was incredible for physics, but we don’t know yet what will make the recent years special. We may, for example, in 80 years wonder about how it was that the early 21st century produced so many ambitious, large-scale experiments.
Einstein didn't waste away his years at the patent office.
Also you could say that the greatest minds of the 20th century all engaged in coming up with bigger and bigger explosions for the military.
And so on.
The silver lining is that many of these companies publicly release their research. E.g. Rob Pike and Ken Thompson may be working on advertising, but we can still benefit from Golang.
And this situation comes because all (or the great majority) of the easy theorems have been proved for most established branches of math.
This also means great discoveries are coming at a later age for mathematicians, as simply getting up to speed in complex fields takes years.
All of this implies it would be hard to have another Von Neumann today.
But for example, one could call the endeavor of classifying all the finite simple groups a big collaborative project.
Hungary had a great education system at the time.
Just a recent example: a Prezi.com founder decided to create an alternative private school to show and lead by example. They didn't get the accreditation this year. If you stick out, they shut you down.
It doesn't surprise me at all that the current political climate is not great for fostering great minds.
I have a theory that maths being taught as a skill, instead of as an intuitive system derivable from axioms as with Euclid, may be a cause.
I seriously doubt you could back this. You are generalizing from a single school. Might as well argue that socialist Hungary had a great education system because of Fazekas. Neither are true. I happen to have a maths teacher degree from a Hungarian university and we studied Hungarian education history and I learned much more about education systems later on my own (and this is not to say this university maths teacher course was a good one, quite the opposite). If you want to know what great education at the time looked like, read up on Summerhill -- it was founded in 1921 but humanistic education has been around for centuries.
What utter baloney! There were a few, very few special math classes that went against the system which delivered results. I went to one, I should know...
The city of Lwów, or Lviv as Ukrainians now call it, had a great school, and the city changed hands during the war.
Also, Polish mathematicians from other universities played an important part in breaking the Enigma. They developed the "bomba", a cryptologic machine whose design was later shared with the UK, where it was refined into the Bombe and used to break it.
More generally, I suspect it was something about the era and its culture that valued intellect and the sciences. These days people like that are often put down as nerds. Leaders and extraverts are praised and set up as examples. Celebrities are also a modern invention; I see them as something quite distinct from "stars". The only requirement to be a celebrity is to be popular.
Also, these days people would rather worship CEOs.
I hope you mean this in the literal sense, in that they are both people who have taken the scientific contributions of others for their own businesses, and somehow get the credit for work they never did.
We have since invented TV, computer games, quantitative finance and web frameworks. All these things cause massive brain drain.
Hardly surprising that he was one of the influences on the character of Dr Strangelove. Was he a remarkable genius? Absolutely. Was he right about everything? Definitely not.
Consider that several nations had tried as hard as possible to avoid war prior to WW2, standing by as entire countries were absorbed by a hostile power. In the end it only resulted in much greater destruction. Many people like JVN saw the same theme playing out with the Iron Curtain, with the destructiveness of the weapons only increasing over time.
The thought process was:
"We need to have a destructive war now to avoid having an earth shattering war later."
There had just been two world-spanning wars in their lifetime. They considered a third inevitable. If it was going to happen at some point, better that it occur before world-ending arsenals were constructed.
Edit: I'm going to ask you the same question I asked another commenter - do you think it would have been better for the US to have attacked the Soviets as Von Neumann and others wanted?
Their predictions on nuclear arsenals were correct. A nuclear war in 1965 would have been vastly worse than one in 1950.
What they got wrong is that great powers would successfully avoid war over the long term. Which at the time was a bet with very poor odds.
Herman Kahn being another.
Paul Boyer's assessment of the pastiche here omits von Neumann, though the environment was target-rich:
While exposing the dangers and dilemmas of deterrence theory, Kubrick also satirized contemporary military figures and strategists, probably including Henry Kissinger, the author of Nuclear Weapons and Foreign Policy (1957); physicist Edward Teller, the “father” of the H-bomb; the ex-Nazi space scientist Wernher Von Braun; and the bombastic, cigar-chomping SAC commander Curtis LeMay, who in 1957 had told a government commission assessing U.S. nuclear policy that, if a Soviet attack ever seemed likely, he planned to “knock the shit out of them before they got off the ground.” Reminded that U.S. policy rejected preemptive war, LeMay had retorted, “No, it’s not national policy, but it’s my policy.” Much of the strategic thinking that Kubrick critiques, and even some of the dialogue in “Dr. Strangelove,” came from the work of Herman Kahn of the RAND Corp., an Air Force-funded California think tank. Kubrick read Kahn’s work carefully, especially his influential On Thermonuclear War (1960). General Turgidson’s upbeat assessment of the outcome of an all-out nuclear exchange directly paraphrases Kahn’s analysis.
(Apparently he does mention von Neumann elsewhere.)
How someone can rationally advocate the cold blooded killing of millions is, frankly, completely beyond me.
For example, after Kennedy resigned himself to striking the sites in Cuba during the missile crisis, it was someone else who told him he was wrong and to reconsider (see the documentary The Fog of War).
To put a fine point on it: even after all the brinksmanship, analysis and diplomacy to resolve the Cuban missile crisis, it was a lone Soviet officer who objected to launching a nuclear-armed torpedo in response to signaling depth charges (interpreted as an attack) from US ships. Pulling the trigger would have started a global thermonuclear war, but it wasn't pulled; one officer decided that, against a 2/3 vote in favor.
"Later that same day, what the White House later called "Black Saturday," the US Navy dropped a series of "signaling depth charges" (practice depth charges the size of hand grenades) on a Soviet submarine (B-59) at the blockade line, unaware that it was armed with a nuclear-tipped torpedo with orders that allowed it to be used if the submarine was damaged by depth charges or surface fire. As the submarine was too deep to monitor any radio traffic, the captain of the B-59, Valentin Grigorievitch Savitsky, decided that a war might already have started and wanted to launch a nuclear torpedo. The decision to launch these required agreement from all three officers on board, but one of them, Vasily Arkhipov, objected and so the nuclear launch was narrowly averted."
There are plenty of examples where, basically, a roll of the dice saved civilization during the cold war. The destruction of society during the cold war was prevented by luck as much as anything else.
The systems are still in place and even today the danger of "accidental" start of the complete destruction of the current civilization is completely real. I always recommend a book subtitled "Confessions of a Nuclear War Planner." I won't write the details so that this doesn't appear as an advertisement. It's worth reading the whole book to get the exact idea how fundamentally flawed the logic of those managing these systems is. Absurdly, they still think they will "win."
As the poet said "We will all go together when we go."
Edit: To be fair, in the event of an Able Archer '83 war we might all have wished that such an attack had happened (not me though, I'd be dead). So it's not impossible to construct a timeline where it was the right thing to do. I'm just curious whether, if you had been a decision-maker back then, you would actually have chosen to do that.
In fact, it's blanket rejection of the cold-blooded killing of millions that requires (at least for most naturalists/atheists) decidedly non-rational thinking --- i.e., adhering to relevant moral principles while disbelieving in moral realism.
Unlike the commenter below, I believe political reasoning is amenable only to a small degree to first principles, for complex, non-linear systems exhibit emergent behaviors not present when investigating the parts. Instead of analytical reasoning, one ought to practice more holistic thinking.
Here's an intro video on complexity, in case you're ever interested: https://www.youtube.com/watch?v=i-ladOjo1QA
In the 20th century, philosophy had Saul Kripke. Among other things, he published one of the seminal papers in the (at that time) nascent area of modal logic when he was 17. I have it on good authority from a professor friend that many - who know Kripke in a professional setting - regard him as having a sort of alien intelligence.
Not to take away from Von Neumann, but there are at least a couple people each century who have these sorts of alien cognitive skills.
Also, while there is probably some truth to them having a sort of alien intelligence, we can't disregard the contexual factors of their success. Von Neumann lived in a unique academic-historical period. A period where one could make lasting, foundational contributions to a variety of disciplines.
I just read the Marcus paper from 1961 that is essential to Smith's thesis. I also read the back-and-forth rebuttals between Smith and Soames. I read the Marcus paper before reading the Smith and Soames papers.
I don't really see how Soames' primary claim is not likely true. He says:
"Marcus, along with certain other philosophers, do deserve credit for anticipating important aspects of contemporary theories of reference. However this credit in no way diminishes the seminal role of Saul Kripke."
When reading the Marcus paper, you really have to start stretching and expanding her arguments if you want to claim that she did more than anticipate 'important aspects of contemporary theories of reference'.
It should also be noted that Timothy Williamson (Oxford) has been one of the staunchest advocates for the proper appreciation of the work that Marcus produced, and yet he doesn't agree with Smith.
But really, this is all probably secondary to the issues surrounding Kripke's importance. Naming and Necessity - like most paradigm-shifting works - was not a one-trick pony. Kripke expanded on his possible-world semantics, introduced distinctions like metaphysical vs epistemic necessity, laid waste to any residual belief in the merits of logical positivism, came up with the first successful (at least, most see it as successful) argument for the existence of synthetic a priori truths, etc. Moreover, Kripke came up with at least two fairly watertight arguments against the descriptivist theory he was going against. If Marcus was the first person to introduce this new theory of reference, then the theory was stillborn. Kripke (if we take him as having taken the theory from Marcus) actually explained the ins and outs of the theory, provided associated puzzles, addressed counterarguments, related it to other issues in analytical philosophy, etc.
Lastly, Naming and Necessity was not the only impressive work of Kripke's. We would have to include his work on modal logic as well as his work on Wittgenstein. There are probably a number of puzzles and counterarguments that were never published that should be included as well. For example, Kripke once attended a conference on personal identity where a philosopher had just presented a new argument in his talk that elicited a standing ovation from the rest of the philosophers in the room (this basically never happens at conferences). Kripke was asked to come up and comment on this new argument. He came up and provided a watertight refutation of it. Everyone in the room was taken aback by this.
His name? Albert Einstein.
Just kidding. But do you have any text of this exchange?
Does Kripke describe a query language in Naming and Necessity?
Indeed, he was so confident in his work that he believed he was "starting and ending" game theory, that there would be little else to work on after his monumental work that is OTGEB. He was very wrong of course, but like other theories of the kind (Information Theory comes to mind), this early lead was very significant to the creation of a coherent field.
For this reason general AI may offer us great insight into our mortal coil.
I grew up near SSFL so have some stories too.
It’s specifically your brand of risk downplaying that led to all these early deaths. Things are much safer today because educated folks understand the risks and prepare for them.
The idea that his cancer came from fallout is just moralizing nonsense, the silly logical error of thinking that the universe operates in a way that punishes badness.
I think you're just waving your hands here because you have no evidence to support the claim.
And the likely cause wouldn’t be some gamma rays, it’d be from ingesting radioactive or chemically harmful matter.
1) Do you know of any information like this? Link?
2) Otherwise: do you think in a way that's different (better) than the average person?
Also, he read very fast; the librarians thought he did not like the books, which is why he was returning all of them the very next week.
Also, what geniuses usually can do is process a lot of information - read and write many papers. E.g. Terence Tao: his blog is always going strong, plus the Polymath projects, plus regular teaching, plus his own (other) research.
Alexander Grothendieck wrote books full of brand-new ideas about very abstract math. Amazing insights. The ability to work with unfamiliar, cutting-edge things while keeping focus, while communicating them to others.
There is a comment here, saying that many hard theorems require one to build a complex branch of math and use it to prove a single statement. So I asked myself: what does it really look like to prove a theorem at such level?
I can tell what's going on in a programmer's mind. Software is very much like an imaginary mechanism, and software engineers are mechanics. For example, this site is a database connected to an HTML page, so a programmer literally imagines a big gray building that represents the database, another building that represents the HTML page, and a pipe that connects them. The database has a few tables: one for user accounts, another for posts like this, another for comments. So a programmer imagines 3 big blocks inside that building, connected with pipes transferring data. Next to the database there is a controller device that sends and receives messages in the pipe connecting it to the users. This analogy continues down to tiny things like classes, methods and variables. The entire HN forum looks like a big multi-dimensional city-like structure with numerous pipes connecting the pieces together. Experienced programmers not only organize this city well, but can also predict and eliminate complexity; e.g. they know that if you put that kind of building over there, others will inevitably connect tens of pipes to it, and the entire city will become a mess where you see a pipe and have no idea what it connects to or what will happen if you cut it.
I imagine there are programmers who build objects by imagining people asking them questions and thinking what the objects would answer, or all kinds of other things. People for whom Spring's Repository/Service/Controller/View are real things with personalities or flavors. I tend to think in a mix of "If I could just tell the computer what to do in English, what would I say?" (function names) and "Whose responsibility is this? Is this really their job?" (classes).
I find myself in a similar situation with development. There's an inescapable feeling that things could be created or improved. The feeling doesn't go away until I've done the work.
So I think what you're talking about is more so dependent on the person's mind rather than what they're doing. Like Conway's law, I think usually systems grow to mimic the developers' way of thinking, whether those are mathematical systems or software systems.
What are the equations that govern an atomic bomb? I don't want to say that I am asking for a friend... /s
I assume that by now the relevant equations would be well known anyway?
EDIT: found it at the end of the article
For anyone interested in learning more about the life and work of John von Neumann, I especially recommend his friend Stanislaw Ulam’s 1958 essay John von Neumann 1903–1957 in the Bulletin of the American Mathematical Society 64 (3) pp 1–49 and the book John von Neumann by Norman Macrae (1992).
Given the success of “The Imitation Game” and “The Theory of Everything” I’m disappointed that no one has made a compelling movie about Von Neumann who I think is a far more interesting character both mentally and in terms of personality.
John Von Neumann and Norbert Wiener : from mathematics to the technologies of life and death by Steve J Heims
Here is a review by John McCarthy: