Shannon’s work has already produced a whole lot, of course, but I don’t think we’re anywhere near seeing the full outcome of his contributions yet.
Currently our computers operate by filling the transistors with charge and/or dumping it to ground. Who even cares about information-theoretic efficiency in that case? The cost of the actual work done is dwarfed by the ancillary cost of running the machine.
If we ever move on to something less brute-force, like reversible quantum cellular automata, I think we’ll see him as an invaluable part in the chain of formalizing what information and computation mean physically: Kelvin/Maxwell -> Shannon -> Landauer -> maybe Bennett.
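For a sense of scale, here’s a back-of-envelope sketch comparing the Landauer bound with an assumed ~1 fJ per CMOS switch; that switching figure is a rough ballpark I’m plugging in, not a measured spec, but the gap is many orders of magnitude either way:

    import math

    # Back-of-envelope: Landauer bound vs. an assumed ~1 fJ CMOS switching energy.
    k_B = 1.380649e-23               # Boltzmann constant, J/K
    T = 300.0                        # room temperature, K

    landauer = k_B * T * math.log(2) # minimum energy to erase one bit, ~2.9e-21 J
    cmos_switch = 1e-15              # assumed ballpark energy per transistor switch, J

    print(f"Landauer bound:        {landauer:.2e} J/bit")
    print(f"Assumed CMOS switch:   {cmos_switch:.2e} J")
    print(f"Ratio above the bound: ~{cmos_switch / landauer:.0e}")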
Von Neumann's suggestion to use the word 'entropy' is a great story:
My greatest concern was what to call it. I thought of calling it 'information,' but the word was overly used, so I decided to call it 'uncertainty.' When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.'
It's a funny story, but now that we understand entropy better, it's clear that information entropy and thermodynamic entropy are the same concept, so it was a very good call.
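Concretely (a minimal sketch, using the standard definitions): Shannon's H and Gibbs's S computed for the same distribution differ only by the constant factor k_B * ln(2), i.e. by units and the base of the logarithm:

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def shannon_entropy(p):   # in bits
        return -sum(x * math.log2(x) for x in p if x > 0)

    def gibbs_entropy(p):     # in J/K
        return -k_B * sum(x * math.log(x) for x in p if x > 0)

    p = [0.5, 0.25, 0.125, 0.125]
    H, S = shannon_entropy(p), gibbs_entropy(p)
    print(H, S)
    print(S / H, k_B * math.log(2))  # these two ratios agree for any distribution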
von Neumann was an alien sent to Earth when we were about to get/were in the process of getting nukes, to make sure we advanced fast enough to not blow ourselves up. And the wild thing is that you, the reader, are only like 75% sure I’m kidding and don’t actually believe this.
p.s. I started reading Tukey's textbook Exploratory Data Analysis just yesterday. It's wonderful so far, a pleasure to read. "It is important to understand what you CAN DO before you learn to measure how WELL you seem to have DONE it." "The greatest value of a picture is when it forces us to notice what we never expected to see."
I learned recently from Alvy Ray Smith's _A Biography of the Pixel_ that Kotelnikov[1] discovered the sampling theorem before Shannon independently did, and that it was Shannon who made it accessible to the West. It's pretty fascinating history.
Not specifically a solution to the St. Petersburg Paradox because, well, it's not a solution to the most general formulation of it (and Bernoulli admits as much in his paper, IIRC).
At least the way I read Bernoulli's paper, it was more of a general musing on how to reduce risk and make insurance decisions.
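To illustrate that point with a sketch of my own (not Bernoulli's worked example): log utility does tame the classic game, where a payoff of 2^k lands with probability 2^-k, but a faster-growing payoff like exp(2^k), as in the super-Petersburg variants, blows up again, which is roughly why it can't be the fully general solution:

    import math

    # Classic St. Petersburg game: payoff 2^k if the first heads is on toss k (prob 2^-k).
    # E[payoff] = sum over k of 2^-k * 2^k, which diverges.
    # Bernoulli's fix: E[ln payoff] converges to 2*ln(2).
    # With a faster payoff exp(2^k), E[ln payoff] = sum of 2^-k * 2^k diverges again.
    def expected_log_utility(payoff_log, max_k):
        return sum(2**-k * payoff_log(k) for k in range(1, max_k + 1))

    for max_k in (10, 20, 40):
        classic = expected_log_utility(lambda k: math.log(2**k), max_k)  # -> 2*ln(2) ~ 1.386
        faster  = expected_log_utility(lambda k: 2**k, max_k)            # ln(exp(2^k)) = 2^k, grows without bound
        print(f"max_k={max_k:>2}  classic={classic:.4f}  faster={faster:.1f}")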
I use software to help build the lists quickly. They're still hand-picked, though, because an automated solution tends to come up with a lot of duds (e.g. threads with no interesting comments).