
Claude Shannon at Bell Labs - woodandsteel
http://spectrum.ieee.org/geek-life/history/a-man-in-a-hurry-claude-shannons-new-york-years
======
keithpeter
Interesting links off the OA to the generation _after_ Shannon who implemented
a lot of the ideas.

> Reflecting on this time later, he remembered the flashes of intuition. The
> work wasn’t linear; ideas came when they came. “One night I remember I woke
> up in the middle of the night and I had an idea and I stayed up all night
> working on that.”

Do people think that putting Shannon somewhere like the Institute for Advanced
Study would actually have quickened his thinking? Or is a level of distracting
background activity actually helpful?

~~~
gwern
I don't think it would have helped; I suspect what mattered was less a
distracting background offering serendipity/weak contacts and more the basic
expectation of finishing and communicating things.

The IAS has something of a bad reputation for fostering rest rather than
revolution (I think both Hamming and Feynman criticized it in rather strong
terms). You should also remember that Shannon _was_ given carte blanche at MIT
because he was so famous and respected, and that's exactly when his public
output went to zero. Not that he didn't keep himself busy (he did a ton of
stock trading and other things, which is where Shannon's volatility harvesting
comes from, etc.), but without any kind of external constraint or direction... I
was shocked to learn that Claude Shannon died in 2001, because from how he
drops out of all histories in the '50s-60s, I always sort of assumed he had
died around then, relatively young.

(Probably a fair number of HNers could learn from Shannon's bad example:
shipping matters!)
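(For the curious: the "volatility harvesting" mentioned above, sometimes called Shannon's demon, is the observation that periodically rebalancing between cash and a volatile asset can compound wealth even when the asset itself goes nowhere. A minimal sketch, assuming an idealized coin-flip asset that doubles or halves with equal odds — illustrative only, not Shannon's actual trading:

```python
import random

def simulate(steps=1000, seed=0):
    """Compare buy-and-hold against a 50/50 cash/asset portfolio that is
    rebalanced every step. The asset doubles or halves with equal odds,
    so its expected log-growth is zero: 0.5*ln(2) + 0.5*ln(0.5) = 0."""
    rng = random.Random(seed)
    hold = 1.0   # wealth if fully invested in the asset and never touched
    rebal = 1.0  # wealth if rebalanced to 50% cash / 50% asset each step
    for _ in range(steps):
        r = 2.0 if rng.random() < 0.5 else 0.5
        hold *= r
        rebal *= 0.5 + 0.5 * r  # cash half is flat, asset half moves by r
    return hold, rebal

hold, rebal = simulate()
# Rebalancing earns 0.5*ln(1.5) + 0.5*ln(0.75) ≈ 0.059 per step in log terms,
# so it pulls steadily ahead of buy-and-hold despite the asset's zero drift.
print(f"buy-and-hold: {hold:.3g}  rebalanced: {rebal:.3g}")
```

The rebalanced portfolio sells after wins and buys after losses, which is where the extra geometric growth comes from.)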

~~~
chubot
Huh? He did more than a lifetime of work, and happened to do it early in life.
It sounds like he worked on whatever was interesting to him. If those things
weren't interesting to others, so be it.

I find it very odd that you would call this a bad example and relate it to the
relatively prosaic idea of "shipping". Shipping is good when you need
feedback, but that's not the kind of work he was doing.

~~~
gwern
He did less than a lifetime of work because he did almost all of his work
early in his lifetime. Then he selfishly betrayed his agreement with society
and MIT by spending decades of tenure fiddling with his toys, and
underachieved his potential. He probably could have done so much more with his
talent if he had just had some more structure in his life. _That_ is why he is
a cautionary example - he was a victim of his own success. You can have all the
talent in the world and casually revolutionize fields as different as genetics
and electronics when you bother to finish something, but if you don't put in
the work, nothing will happen and you will squander it all.

> Shipping is good when you need feedback, but that's not the kind of work he
> was doing.

No, Shannon was a genius and didn't need much in the way of feedback. Shipping
is good because it requires you to complete the project and take it from 99%
finished & only 1% useful to 100% finished & 100% useful, and it makes it
available to the rest of the world, instead of buried in your estate's papers
for a journalist like William Poundstone to uncover decades later after it
would have been useful if it had been completed & published in a timely
fashion.

~~~
chubot
This is one of the odder things I've read on the Internet... I think you're
misunderstanding the nature of fundamental research.

Nobody knows in advance which ideas will be groundbreaking and which won't.
Most people who have made big discoveries spend huge amounts of time on things
that will go nowhere, or "toys". I'm pretty sure there's at least an entire
chapter of one of Feynman's books devoted to this.

People who make big breakthroughs tend not to be the kind of people who are
self-consciously trying to make big breakthroughs. They just do what they
want, guided by their own curiosity.

The whole point of tenure is to insulate you from pressure on what to work on.
Maybe some people at MIT were critical of Shannon; I have no idea.

But that's the nature of creativity. You can't manage it. If MIT wanted they
could get rid of the tenure system, but that would be a horrible idea.

If he didn't want to work on things he was "supposed to", so be it. The entire
theory of information wasn't something he was "supposed to" work on either.

Your idea of 'shipping' also seems to indicate a misunderstanding of the
research process.

~~~
gwern
I understand fundamental research just fine. I suggest you reread my comments
and perhaps also read the bio and earlier materials on Shannon like _Fortune's
Formula_. Finishing papers does not blight precious snowflakes like Shannon,
nor would it shatter his delicate psyche. Not finishing drafts of papers was
not critical to his genius and creativity. Shannon is far from the first
person to procrastinate and be passive-aggressive, and the remedies would have
been the same for him as for anyone else, had his environment been less in awe
of him and less beholden to Romantic ideas of genius like those you espouse,
in which a genius's sacred solitude cannot be disturbed.

> Your idea of 'shipping' also seems to indicate a misunderstanding of the
> research process.

Publishing is pretty darn critical to the research process...

------
chubot
This sounds like a promising book. I read _The Idea Factory_ a few years ago,
a related and fantastic book about the history of Bell Labs.

Around that time I came across an interesting idea. I don't remember if it was
in _The Idea Factory_ or in material I read afterward, but it's related to one
of the central ideas from this excerpt:

 _The sender no longer mattered, the intent no longer mattered, the medium no
longer mattered, not even the meaning mattered: A phone conversation, a snatch
of Morse telegraphy, a page from a detective novel were all brought under a
common code._

The idea I came across is that:

    Shannon's information theory, devised at AT&T, indirectly led to the demise of AT&T's monopoly.

Before Shannon, there was no concept of the information-carrying capacity of a
wire. And AT&T's monopoly was largely due to it having the biggest set of
wires, which as you can imagine were expensive to deploy in the 19th/20th
century. I remember that calling California from NYC was a huge achievement,
precisely because of the number of physical wires that had to be connected.
AT&T was the first to offer that service.

So I think the argument was that it made economic sense for a single
organization to own all the wires, so it could maintain them with a set of
common specifications and processes. But if you can reduce every wire to a
single number -- its information-carrying capacity -- then this argument goes
out the window. You can use all sorts of heterogeneous links made by different
manufacturers and maintained by different companies.

(I'm not sure if this is historically accurate, but it sounds technically
plausible.)
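The "single number" in question is the Shannon–Hartley capacity, C = B·log2(1 + S/N): once you know a link's bandwidth and signal-to-noise ratio, its physical construction is irrelevant. A quick sketch (the phone-line figures below are illustrative textbook numbers, not real line specs):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit in bits/second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Voice-grade phone line: ~3 kHz bandwidth, ~30 dB SNR (illustrative).
snr = 10 ** (30 / 10)  # 30 dB -> linear power ratio of 1000
c = channel_capacity(3000, snr)
print(f"~{c / 1000:.1f} kbit/s")  # prints "~29.9 kbit/s"
```

That figure is close to the ceiling that analog dial-up modems eventually ran up against, which is the point: the limit follows from two numbers, not from who built the wire.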

So my thought was that there's an analogous breakthrough waiting to happen
with respect to cloud computing. Google and Facebook have information
monopolies based on centralized processing of big data in custom-built data
centers. Likewise, AWS has a strong network effect, and is hard to compete
with even if you have billions of dollars to spend.

So my question is: Is it possible there will be a breakthrough in
decentralized distributed computing? And could it make obsolete the
centralized cloud computing that Google/Facebook/Amazon practice? Just like
AT&T had no reason to be a monopoly after Shannon, maybe a technological
breakthrough will be the end of Google/Facebook/Amazon.

Maybe this idea is too cute, and you can poke holes in it on a number of
fronts, e.g.:

\- Shannon's ideas were profound, but they didn't actually bring down AT&T.
AT&T was forcibly broken apart, and there are still network effects today that
make re-mergers rational.

\- Centralized distributed computing will always be more efficient than
decentralized distributed computing (?) I'm not aware of any fundamental
theorems here but it seems within the realm of possibility. (EDIT: On further
reflection, the main difference between centralized and decentralized is
trust, so maybe they're not comparable. Decentralized algorithms always do
more work because they have to deal with security and conflicting intentions.)

But still I like the idea that merely an idea could end an industry :)

Relatedly, I also recall that Paul Graham argued that there will be more
startups because of decreasing transaction costs between companies, or
something like that. But it still feels like the computer industry is
inherently prone to monopolies, and despite what pg said, the big companies
still control as much of the industry as Microsoft did back in the day, or
maybe more.

~~~
zrm
> So I think the argument was that it made economic sense for a single
> organization to own all the wires, so it could maintain them with a set of
> common specifications and processes. But if you can reduce every wire to a
> single number -- its information-carrying capacity -- then this argument
> goes out the window. You can use all sorts of heterogeneous links made by
> different manufacturers and maintained by different companies.

> (I'm not an electrical engineer, so I have no idea if this is all true, but
> it sounds plausible.)

You're describing the internet. :)

> Is it possible there will be a breakthrough in decentralized distributed
> computing?

The hard problem is security. Right now you have to trust Amazon et al with
your data, which is not really what you want, but even that is better than
having to trust Some Guy running a host out of his garage.

This isn't a real problem for static content. That you can just encrypt, throw
it up on IPFS or similar and add federated authentication for access to the
decryption keys.

But data processing is something else.

There are people trying to solve that with encryption too, but it's hard, and
to a large extent equivalent to making effective DRM, which is not a desirable
thing for your problem to be equivalent to.

A different approach is to trust people who you actually trust. We're now at
the point that a fast processor consumes a single digit number of watts. So
you can run your own server at home, and so can your friends and family, which
allows you to pool capacity for load sharing and higher availability.

Then services become software you install on your trusted pool of servers.

There is no technical reason that can't exist, people just haven't done it yet
(or they have but the future is not evenly distributed).

~~~
chubot
You quoted me before an edit -- what I meant was: Did things actually happen
that way historically? That is, did engineers actually accept/design more
heterogeneity in the physical network as a result of Shannon's ideas?

I'm talking about networks with heterogeneous physical specifications -- that
happened long before the Internet. You have the same problem with just a plain
analog circuit-switched phone system. No digital computers involved.

Static content isn't really distributed computing; it's more like networking.
The types of breakthroughs I'm thinking of are more along the lines of
blockchain, homomorphic encryption, differential privacy, zero-knowledge
proofs, etc.

In other words, different ways of collaborating over a network.

The thing that jumps out at me is that most of these technologies are
fantastically expensive computationally.

~~~
zrm
> Did things actually happen that way historically? That is, did engineers
> actually accept/design more heterogeneity in the physical network as a
> result of Shannon's ideas?

[https://en.wikipedia.org/wiki/Time-division_multiplexing](https://en.wikipedia.org/wiki/Time-division_multiplexing)

[https://en.wikipedia.org/wiki/Packet_switching](https://en.wikipedia.org/wiki/Packet_switching)

Shannon's paper was 1948. TDM was in commercial use in the 1950s. Even ARPANET
was in the design phase by the 1960s.

The thing about information theory is that you can use it in practice without
understanding all the math. TDM existed in the 19th century. You can even
study human language in terms of information theory, but that predates 1948 by
a million some odd years. And we _still_ don't understand all the math --
information theory is related to complexity theory and P vs. NP and all of
that.
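For example, the entropy H = -Σ p·log2(p) of a text's character frequencies bounds how compactly it can be coded, with no math beyond counting. A crude unigram sketch (Shannon's own experiments, which used longer-range context, put English closer to ~1 bit per character):

```python
import math
from collections import Counter

def entropy_bits_per_char(text):
    """Empirical unigram entropy: H = -sum(p * log2(p)) over char frequencies."""
    n = len(text)
    return -sum((k / n) * math.log2(k / n) for k in Counter(text).values())

sample = "a snatch of morse telegraphy, a page from a detective novel"
h = entropy_bits_per_char(sample)
# A unigram model ignores all context, so this overstates English's true
# entropy rate; it still comes in well under log2(alphabet size).
print(f"{h:.2f} bits/char")
```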

As to whether heterogeneous networks would have dethroned AT&T, we don't know,
because the government broke it up just as packet-switched networks were
becoming mainstream. Moreover, even now the telecommunications market is not a
model of how market forces work. You still can't just go to Home Depot, pick
up some fiber optic cable, connect up your whole neighborhood, and then go
collectively negotiate for transit.

It's a lot easier, regulation-wise, to go into competition with AWS than with
Comcast. And AWS correspondingly has a lot more real competitors.

> Static content isn't really distributed computing; it's more like
> networking.

It's distributed data storage, but yes, the solutions there are very similar
to the known ones for networking.

> The thing that jumps out at me is that most of these technologies are
> fantastically expensive computationally.

The solutions we use for networking and data storage trade off between
computation (compression, encryption) and something else (network capacity,
storage capacity, locality), which are good trades because computation is
cheap relative to those things.

It's _much cheaper_ computationally to send plaintext data than encrypted
data, by a factor of a hundred or more. But the comparison isn't between
encryption and plaintext, it's between encryption and plaintext that still has
to be secured somehow, e.g. by physically securing every wire between every
endpoint. By comparison the computational cost of encryption is a bargain.

But if you have to use computation to secure computation itself, the economics
are not on your side. Processor speed improvements confer no relative
advantage and even reduce demand for remote computation as local computation
becomes less expensive.

Where those technologies tend to get used is as an end run around an
artificial obstacle, where the intrinsic inefficiency of the technology is
overcome by some political interference weighing down the natural competitor.

The obvious example is blockchain, which would be insurmountably less
efficient than banks if not for the fact that banks are so bureaucratic,
heavily regulated and risk averse. But if banks start losing real customers to
blockchain they'll respond. _Hopefully_ by addressing their own failings
rather than having blockchain regulated out of existence, but either way the
result will be for blockchain to fade after they do.

~~~
woodandsteel
> The obvious example is blockchain, which would be insurmountably less
> efficient than banks if not for the fact that banks are so bureaucratic,
> heavily regulated and risk averse. But if banks start losing real customers
> to blockchain they'll respond. Hopefully by addressing their own failings
> rather than having blockchain regulated out of existence, but either way the
> result will be for blockchain to fade after they do.

That's assuming it is technically possible for them to be cheaper and faster
and easier to use without blockchain technology. My non-expert understanding
is that it's not possible.

Also, from what I understand, a key reason financial institutions are so
interested in the blockchain is that it helps greatly with the trust and
settlement problems that at present take so much work and expense.

------
B1FF_PSUVM
> many of Shannon’s colleagues found themselves working six days a week.

Hmm. Was the 5-day work week common by 1940?

I have a notion that we went from only Sunday off, to Sunday plus Saturday
afternoon, then to Sunday plus all of Saturday. Not sure when that happened,
or where it started.

------
losteverything
What jumped out at me was the mere mention of the draft (WWII). I even looked
it up.

The narrative my WWII relatives give is that everyone enlisted. If you didn't,
there was shame that required an explanation.

~~~
tyingq
~60% of the US military in the WWII time period were draftees.
[https://www.nationalww2museum.org/students-teachers/student-...](https://www.nationalww2museum.org/students-teachers/student-resources/research-starters/research-starters-us-military-numbers)

Doesn't refute the idea that shame might have been involved, but...

~~~
losteverything
My history could be very distorted, but the "war is bad and to be avoided"
idea came to life (in my lifetime) during the Vietnam years.

I can't imagine the US population rallying around a new war like they did in
the '40s.

~~~
tyingq
Only about 25% of the US military in Vietnam were draftees. That suggests
there were two distinct trains of thought on "war is bad".

------
dabber
Hmm, ieee.org still has Flash ads

[Edit]

Sorry, maybe not a Flash ad. Upon further inspection it was an iframe whose
contents were blocked by my ad blocker, presenting me with that grey box in
Chrome. I'm not going to disable it to check what it actually is.

~~~
jasonkostempski
It looks exactly like AdFly at first glance. I thought I was being directed to
a Minecraft mod.

