Richard Feynman and the Connection Machine (longnow.org)
225 points by jonbaer on Aug 13, 2016 | 32 comments



> To find out how well this would work in practice, Feynman had to write a computer program for QCD. Since the only computer language Richard was really familiar with was BASIC, he made up a parallel version of BASIC in which he wrote the program and then simulated it by hand to estimate how fast it would run on the Connection Machine.

I have most of Feynman's memoirs and a few of his biographies, but I see only scant references to his work in computing, possibly because it seems so minor next to his other accomplishments. That said, it would be interesting to read more about his computational work, given that, as the OP says, he was also very much a pencil and paper guy.


I haven't read it, but I saw it on a friend's bookshelf:

https://www.amazon.com/Feynman-Lectures-Computation-Richard-...


Highly recommend it! As is typical of Feynman's writing, it is both enlightening and entertaining.


Fantastic book; it discusses everything from the basic physics of logic gates and computational theory to quantum computers, with problems to match.

In some ways it's an introductory text, but written in an open and accessible way.


If I am not mistaken, this book was based on his lectures at Caltech in the '80s, right around the time he was thinking about quantum computing.


You might enjoy his talk on what a computer actually is: https://www.youtube.com/watch?v=EKWGGDXe5MA


Great talk; hilarious how he keeps almost falling over.


You forget that Feynman ran the 'human computer' group during the Manhattan Project. He was writing algorithms and executing them on the 'machine'. It was groundbreaking work for that time.


Before 1946, "computer" without an adjective meant a human computer. When electronic ones started being constructed in the 1940s, they were at first called "electronic computers" to distinguish them from the human kind.


Agreed that Feynman was at the epicenter of scientific computing; before the Manhattan Project, computers were used only for business.


The computer lab in Cambridge was founded in 1937, before electronic computers. At that time its equipment consisted of mechanical calculators and analogue differential analysers. The purpose was scientific computing, especially theoretical chemistry. There are a few pictures here: http://www.cl.cam.ac.uk/relics/archive_photos.html


> that he was also very much a pencil and paper guy.

I'm curious about this as well. In particular, I wonder whether Feynman wrote out his code with pencil and paper, as that's something I've always fantasized about doing.


You may be interested in the algorithmic notation I designed for writing code with pencil and paper without the usual fuzzy thinking that accompanies pseudocode: http://canonical.org/~kragen/sw/dev3/paperalgo.


I'm probably biased by writing a lot of algorithms in the classical way, but I find your notation a lot less readable and very unintuitive. Pseudocode is easy to transform into code; your notation, not so much.

I do applaud your determination in rejecting the classical notation and coming up with something different. Maybe it's just that I don't like the syntax of math either (if math were an API, it would be considered poorly designed, poorly documented, incomplete, and inconsistent).


Yes, as I say on the page, I find it less readable, too. But it's much easier to write. And it isn't that hard to transform to code.


I must say I find it more readable than pseudocode, the reason being that all the keywords have been replaced with graphical elements and layout. This means that all the text is essential to the algorithm, which makes it much easier for me to read.


I'm glad you like it! Maybe I'll get used to it in time myself.

I do still use things like "argmin", which you could argue is a "keyword"; by argmin_{x ∈ S} f(x), I mean what you would express in Python as min(S, key=f), which ends up translating to something like this in C:

    int min_found = 0;                      /* set once we've seen any item */
    T min;                                  /* T, U, iter are placeholder types */
    U min_item;
    for (iter it = first(S); hasNext(it); it = advanced(it)) {
       U item = itemAt(it);
       T val = f(item);

       if (!min_found || val < min) {       /* first item, or a new minimum */
          min_found = 1;
          min = val;
          min_item = item;
       }
    }

    do_whatever_with(min_item);             /* beware: undefined if S is empty */
Maybe this is what someone earlier meant when they said this paper algorithm notation was still too hard to translate into code, but I feel like argmin (over a finite set) is a sufficiently familiar concept that there's no need to spell it out in more detail, and indeed it's a standard library function in Python.
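
(If it helps anyone, here's that same argmin loop as a self-contained, compilable C sketch. The concrete element type, the sample set S, and the cost function f below are my own stand-ins, not anything from the notation page.)

    #include <stdio.h>

    /* Hypothetical cost function standing in for f. */
    static double f(double x) { return (x - 3.0) * (x - 3.0); }

    int main(void) {
        double S[] = {1.0, 2.5, 4.0, 3.1};   /* the finite set S */
        int n = sizeof S / sizeof S[0];

        double min_item = S[0];              /* argmin so far */
        double min_val = f(S[0]);            /* its cost */
        for (int i = 1; i < n; i++) {
            double val = f(S[i]);
            if (val < min_val) {             /* strictly smaller: keeps the first minimum */
                min_val = val;
                min_item = S[i];
            }
        }
        printf("argmin = %g (f = %g)\n", min_item, min_val);
        return 0;
    }

Because S here is non-empty by construction, the min_found flag from the sketch above isn't needed.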


Go for it! There are definitely situations where I am unsure how to architect something and I'll just go to a coffee shop with a notebook and sketch out pseudocode while I think through different possible implementations.


I think software engineers pretty much wrote with pencil on paper in the punch-card era, with operators then punching the cards, at least in some orgs. This is a bit different from what Feynman did, though, I think.


We generally started with flow charts, sketching them out on regular paper. Once we were ready to write the code, say in FORTRAN, COBOL, PL/I or assembler, we would write it out on wide pads of paper with graph-paper-like markings, so that each character went in a single box and it was easy to keep track of columns.

Every programmer I ever met punched his or her own cards. Keypunch operators did exist, but they were more often employed to enter data at keypunch machines, because data was often fed into computers via large stacks of punched cards. Data was virtually always entered in fixed fields on the 80-column punch cards.

The actual keypunching was done most often, during my time, on the IBM 029 keypunch machine. I actually started out on the IBM 026 keypunch, but the 029 was much more suitable for keypunching programs. (The 026, a 1949 design, had no plus sign or parentheses on its keyboard, and required a kind of complicated shift where it took multiple keystrokes to punch out the correct Hollerith code of holes for a parenthesis.) By the time I got to MIT everyone was using the much better 029.

Operating the 029 required first punching a card to be fitted on the control drum of the machine; it controlled tab stops and some other basic field skipping. One also had to learn how to clear card jams, and how to quickly duplicate cards or make minor corrections by duplicating parts of a card while inserting new punches at certain locations.

Once a few hundred or maybe a thousand cards were punched, you carefully carried the decks in boxes to be assembled into trays of cards that the mainframe operators would take at a submission window. Then it was time for a break while you waited for the results, sometimes even overnight depending on your priority; they came back printed on wide sheets of fan-fold paper with alternating green and white stripes.

Careful desk checking of the code was required because turnaround time was at least several minutes, and there were times when I would not get my output back for hours. Each bug, or even a syntax error, meant going back to the keypunch to fix it and starting over. Your source resided in your box of cards; it didn't remain on the machine after your run, successful or not, completed.


Terrific writeup. Thanks for submitting this. Pieces about Feynman are seldom disappointing (it's as if he continues to live through these inspired stories), but this one had lots more to offer as well.

Note that the markup was not quite exported completely, so [links] and $math notation$ are still in a raw syntax, which is confusing at first.


TL;DR: Read the (f'ing) article! The details matter!


>The notion of cellular automata goes back to von Neumann and Ulam, whom Feynman had known at Los Alamos. Richard's recent interest in the subject was motivated by his friends Ed Fredkin and Stephen Wolfram, both of whom were fascinated by cellular automata models of physics. Feynman was always quick to point out to them that he considered their specific models "kooky," but like the Connection Machine, he considered the subject sufficiently crazy to put some energy into.

I am currently starting round four in my studies of CAs, and I thought this quote was interesting in bridging the earlier work of von Neumann and Ulam to that of Ed Fredkin and Stephen Wolfram, with Feynman in the middle spanning them.
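
(For anyone who hasn't played with one: below is about the smallest cellular automaton demo I can write, a one-dimensional, two-state CA in C. The choice of rule 110, the grid width, and the step count are arbitrary picks of mine, not anything from the article or the book.)

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Elementary (1-D, two-state) cellular automaton: each cell's next
           state is looked up from its own state and its two neighbours'. */
        const int rule = 110, width = 64, steps = 24;
        unsigned char cells[64] = {0}, next[64];
        cells[width / 2] = 1;                        /* start from a single live cell */

        for (int t = 0; t < steps; t++) {
            for (int i = 0; i < width; i++)
                putchar(cells[i] ? '#' : '.');
            putchar('\n');
            for (int i = 0; i < width; i++) {
                int l = cells[(i + width - 1) % width];   /* wrap-around neighbours */
                int c = cells[i];
                int r = cells[(i + 1) % width];
                next[i] = (rule >> ((l << 2) | (c << 1) | r)) & 1;
            }
            memcpy(cells, next, sizeof cells);
        }
        return 0;
    }

Each cell's next state depends only on itself and its two neighbours, yet rule 110 alone is known to be Turing-complete, which gives some flavour of why Fredkin and Wolfram suspected such systems could model physics.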

The book, "Cellular Automata: A Discrete Universe" by Andrew Ilachinski, has had its critics, but it is an amazing compendium to read.


Don't Fredkin and Ulam overlap by about 30 years?

Thanks for the book recommendation!


Here's a movie recommendation if you're interested in Fredkin, Wolfram, Computation, etc.: "Digital Physics"

If you watch it on Vimeo, iTunes, or Amazon and leave a comment you can get a free pack of cards (with gum!).

www.DigitalPhysicsMovie.com


"Connecting a separate communication wire between each pair of processors was impractical since a million processors would require $10^{12]$ wires."

Isn't it more like $\sum_{n=1}^{999999} (10^6 - n)$ = 499,999,500,000?

https://www.wolframalpha.com/input/?i=999999+%2B+999998+%2B+...

Since "one" wire connect in theory two processors?:)


Good luck getting all the extra bugs and timing issues out of a half-duplex setup like that.


I re-read this text in full every time it comes back up on HN and never stop loving it.


It's all about Biomimetic Cognition.


Could you elaborate on this statement? I'm intrigued...



Sounds quite similar to https://en.wikipedia.org/wiki/Vector_space_model and related ideas.



