Marvin Minsky dies at 88 (nytimes.com)
1259 points by joelg on Jan 25, 2016 | 178 comments



I met Minsky once in February 1987, at a rally in Las Vegas to protest the Nevada Test Site. A lot of famous people (Carl Sagan, Barbara Boxer, Tom Downey, Ramsey Clark, Martin Sheen, Kris Kristofferson) were there, but I'd gone to meet Minsky.

I had taken along a (quite early) copy of the GNU Emacs manual. The FSF was selling them, but I'd put this one together myself: running TeX on the texinfo source, converting the output for the Imagen printer, and then taking it to Kinko's to be spiral bound, including my imitation of the yellow cover that the FSF version had.

I asked Minsky for his autograph. He looked at what I presented, understood what it was, and autographed inside the front cover: "Marvin Minsky, friend of Stallman".

In April of 2011, in an airport in Honolulu, I presented that same manual for an autograph to Richard Stallman. He looked my manual over for a long time. IIRC, it documents Emacs version 16 or 17. Then he signed it, below Minsky's autograph, "Richard M. Stallman - Friend of Minsky".

RIP, Marvin.


Thank you for sharing. Anecdotes like this are why I love visiting HN. They also help me humanize some of my CS heroes. FWIW, I'd also love to see the autographs.


Amazing. If possible, can you please share a picture of the autographs?


Yes. Please please please do share the picture of autographs.


Interesting. But I wonder, was it a coincidence that you had the manual with you in Honolulu? Or do you take the manual with you everywhere you go? :)


I'd actually paid to fly Stallman to Honolulu to attend pfosscon. https://www.youtube.com/watch?v=E-lP8lPYPDU

While he was in Hawaii, I also paid to fly him to the "Big Island" (Hawaii) to speak with people (mostly astronomers and their children, though some people drove up from Hilo.)

https://blogs.oracle.com/barton808/entry/my_travels_in_hawai...

Since I was paying for his ticket, I knew he would be at the airport.

I lived in Hawaii at the time.

Back in Las Vegas (where I lived at the time), the book, and Emacs itself, were new to me. I had taken it along to read.


Do you still have it? I'd love to see a picture


I'm looking for it.


In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

"What are you doing?", asked Minsky.

"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.

"Why is the net wired randomly?", asked Minsky.

"I do not want it to have any preconceptions of how to play", Sussman said.

Minsky then shut his eyes.

"Why do you close your eyes?" Sussman asked his teacher.

"So that the room will be empty."

At that moment, Sussman was enlightened.

RIP.


Danny Hillis: "The first time I met Marvin Minsky: I walked into his office, I was very intimidated. He was sitting there, he was throwing wadded up pieces of paper at a wastebasket across the room, and doing a terrible job of it. He's missing it, they're all falling short. So I watch this for a while. Then he looked up at me and said 'ah! I forgot! It's one half em gee squared!'"

- From his talk On Game Software Development, in 2001 I think (from Technetcast.com)


Maybe I'm wrong, but that sounds like he was trying to get the formula for kinetic energy, which would be half em vee squared.

Just to be clear: I'm not questioning the story, just the details in the recollection :)


I did wonder at the discrepancy at the time, so I re-listened to the audio several times carefully when I was writing the transcript above. To me, he is clearly saying gee rather than vee (could be an artifact of the recording though).

I'm sure Hillis knows more physics than I do (though I knew the equations of motion pretty well at one time), but he could easily have misspoken. I didn't pursue this line of thought, but considered that it might have had something to do with deriving an expression for the vertical position in a gravitational field, perhaps in terms of horizontal motion or something.


Will robots inherit the earth?

Yes, but they will be our children.

--Marvin Minsky http://web.media.mit.edu/~minsky/papers/sciam.inherit.html


Amazingly, according to this 1981 interview in the New Yorker, Minsky's first neural net was itself randomly wired!

http://www.newyorker.com/magazine/1981/12/14/a-i

"Because of the random wiring, it had a sort of fail-safe characteristic. If one of the neurons wasn’t working, it wouldn’t make much of a difference—and, with nearly three hundred tubes and the thousands of connections we had soldered, there would usually be something wrong somewhere. In those days, even a radio set with twenty tubes tended to fail a lot. I don’t think we ever debugged our machine completely, but that didn’t matter. By having this crazy random design, it was almost sure to work, no matter how you built it."



Found this original version copied from this story (original source is a dead link) [1]

"So Sussman began working on a program. Not long after, this odd-looking bald guy came over. Sussman figured the guy was going to boot him out, but instead the man sat down, asking, “Hey, what are you doing?” Sussman talked over his program with the man, Marvin Minsky. At one point in the discussion, Sussman told Minsky that he was using a certain randomizing technique in his program because he didn’t want the machine to have any preconceived notions. Minsky said, “Well, it has them, it’s just that you don’t know what they are.” It was the most profound thing Gerry Sussman had ever heard. And Minsky continued, telling him that the world is built a certain way, and the most important thing we can do with the world is avoid randomness, and figure out ways by which things can be planned. Wisdom like this has its effect on seventeen-year-old freshmen, and from then on Sussman was hooked."

[1] http://spetharrific.tumblr.com/post/26600309788/sussman-atta...


That sounds like the version told in Levy's Hackers. Maybe someone in HN-land can verify that; I can't find my copy.


Someone posted this elsewhere in the thread:

https://web.archive.org/web/20120717041345/http://sch57.msk....


Verified. Chapter 6, page 117 in my paperback edition.


I had no idea that koan was a (mostly) true story. Thanks a lot for posting.


Yes, I had once asked GJS about exactly that story, and he confirmed it (though maybe not the exact words).


I always loved this TED talk of his

https://www.youtube.com/watch?v=RYsTv-ap3XQ


This is not the Prof. Minsky I remember from the 1970s, when I was an MIT student. His mind was laser sharp then and could drill into the core of any problem.


It doesn't look to me like he's unfocused or forgetful on the video. What are you referring to exactly?


It is now several decades later.

Do TensorFlow/CNN builders use random initial configurations, or custom-designed structures?


There is another deeper meaning in this koan.

It's related to the No Free Lunch theorems, which basically say that if an algorithm performs well on a certain class of learning, searching or optimization problems, then it necessarily pays for that with degraded performance on the set of all remaining problems.

In other words, you always need bias to learn meaningfully. The more bias (of the right kind) you have, the faster you learn the task at hand and the slower you learn everything else. In neural networks the bias is not just the weights. There is bias in the choice of random distribution for the network weights (uniform, Gaussian, etc.). There is bias in the network topology. There is bias in the learning algorithm, activation function, etc.

Convolutional neural networks are a good example. They have very strong bias baked into them, and it works really well.
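As a back-of-the-envelope illustration of that baked-in bias (the sizes here are arbitrary, chosen just for this sketch): a convolutional layer's locality and weight sharing collapse the parameter count compared with a fully connected layer over the same image.

```python
# Map a 32x32 image to a 32x32 feature map, two ways:
fc_params = (32 * 32) * (32 * 32)   # fully connected: every pixel feeds every output
conv_params = 3 * 3                  # convolution: one shared 3x3 kernel, slid everywhere

print(fc_params, conv_params)        # over a million parameters vs. nine
```

That factor-of-100,000 reduction is pure assumption about the problem (nearby pixels matter, and the same pattern matters everywhere), which is exactly the "right kind of bias" for images.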


Usually random, drawn from a certain distribution (e.g. Gaussian with a standard deviation of 0.001, or with a standard deviation dependent on the number of input/output units, as in Xavier initialization).

For some tasks, you may wish to initialize using a network that was already trained on a different dataset, if you have reason to believe the new training task is similar to the previous task.
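As a minimal sketch of what those two initializations look like in NumPy (the helper name `init_weights`, the seed, and the layer sizes are made up for illustration):

```python
import numpy as np

def init_weights(n_in, n_out, scheme="xavier", rng=None):
    """Sample an (n_in, n_out) weight matrix under a named scheme."""
    rng = rng or np.random.default_rng(0)
    if scheme == "gaussian":
        # Fixed small standard deviation, e.g. 0.001.
        return rng.normal(0.0, 0.001, size=(n_in, n_out))
    if scheme == "xavier":
        # Xavier/Glorot: variance scaled by fan-in + fan-out.
        std = np.sqrt(2.0 / (n_in + n_out))
        return rng.normal(0.0, std, size=(n_in, n_out))
    raise ValueError(f"unknown scheme: {scheme}")

W = init_weights(784, 256)   # e.g. a 784-input, 256-unit layer
```

The Xavier scaling keeps activation variance roughly constant from layer to layer, which is the usual motivation for tying the standard deviation to the unit counts.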


NN weights need to start random because otherwise two weights with exactly the same value can get "stuck" and be unable to differentiate. Backpropagation relies on starting from random patterns that roughly match, so that it can fine-tune them.

But the weights are often initialized to be really close to zero.


Given the era, though, Sussman may have actually been working with a neural net that's not the typical hidden-layer variety. "Randomly wired" could be a statement about the topology of the network, not about the weights.


There is no evidence he was actually working with a neural net.

https://web.archive.org/web/20120717041345/http://sch57.msk....


I had no idea this existed; it's brilliant!


If you start with the same weights, then the neurons with similar connections will learn the same things. Random initialization is what gets them started in different directions.
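A tiny NumPy demonstration of that lockstep effect (the network shape and every number here are made up for illustration): with identical initial weights, backprop hands both hidden units identical gradients, so they can never diverge.

```python
import numpy as np

# Tiny net: 2 inputs -> 2 hidden units (sigmoid) -> 1 linear output.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0])     # one training example
y = 1.0                       # its target

W1 = np.full((2, 2), 0.3)     # both hidden units initialized identically
w2 = np.array([0.7, 0.7])     # output weights, also identical

h = sigmoid(W1 @ x)           # both hidden activations come out equal
err = (w2 @ h) - y            # squared-error derivative (up to a factor of 2)

# Gradient of the loss w.r.t. each hidden unit's input weights:
grad_W1 = np.outer(err * w2 * h * (1 - h), x)

# Both rows are identical, so the two units update in lockstep forever.
assert np.allclose(grad_W1[0], grad_W1[1])
```

Any asymmetry in the initialization (random draws, in practice) breaks this tie and lets the units specialize.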


Random weights, but the spatial organization of the inputs follows the input geometry.


The key is what learning procedure is used. It is not clear from the story if the nets were learning, and if so how.


Honest question: What does this koan mean? Specifically, what is closing one's eyes analogous to in the neural network? What is the recommended alternative action?


Somewhere else in the thread is a closer rendition of how the actual exchange went. Using that, I finally understand the koan:

Closing his eyes did not make the room empty. It made him not know which things were where.

Randomizing the neural network did not remove all the preconceptions from the network. It made him not know what the network's preconceptions were.


I had the chance to walk with Marvin Minsky down a hallway once, and I asked him what he thought of Bayesian reasoning. He said that it seemed to him like it was still part of a general trend away from tackling the central problem in AI. I said I didn't think so, but he seemed tired so I didn't try to go into detail.

There's an urban legend that I once got into a fistfight with Marvin Minsky, which does about as well as anything to illustrate the crazy, crazy things that people have been known to believe about me.

We have temporarily misplaced a great mind. See you later, Professor Minsky.


> We have temporarily misplaced a great mind. See you later, Professor Minsky.

This kind of statement gives me great hope, and in particular represents the kind of fundamental mindset change that helps counter many of the painful aphorisms commonly pulled out when someone dies. I find it deeply unfortunate how rarely it applies, but as mentioned elsewhere in the thread, it applies here. Thank you.


I think it's more a reference to him being cryo-preserved, according to one of the other comments in this thread.


Resurrection from cerebral hemorrhage seems unlikely, though. How would they do it - wash it out? A brain that thought about the brain for a lifetime, wiped out in minutes.


I know, it's very unfortunate. And who knows, maybe the damage is irreversible even with potential and futuristic medical advances, simply due to information loss.

But, as someone who plans on being cryo-preserved eventually, I'd say that whatever chance there is of reviving whatever remains of this individual, it should be taken. I'd want to live in the future, even if that meant not having my full cognitive abilities. Maybe not as a very cognitively impaired individual, but I guess I'd put that type of stipulation in the contract if I were worried about it.


You just made me remember a scifi story I read probably 10-15 years ago that I really wish I could recall the name of, as I'd love to read it again. In the story a man is awakened from an especially long cryogenic sleep, and his brain was implanted into a new body since the old one was no longer viable. In addition, he was given a brain implant with a screen in his eye that could be activated by a couple of quick blinks. At first he has the mind of a child, but slowly regains his faculties over many weeks or months, with the aid of the implant.


How about a cybernetic creature that is 30% Minsky? Is it much worse than the current approach of passing down DNA and spending 18 years on training?


I'd find the thought of only having 30% of your brain left functioning pretty scary - rather be dead than barely alive.

Everything eventually ends, I don't see the appeal in pushing that only to suffer.


I think you overestimate the amount of brain matter required to maintain personality.

It is possible to lose an entire brain hemisphere and retain complete functionality (see https://en.wikipedia.org/wiki/Hemispherectomy).

There is a huge amount of redundancy in the brain, and most brain matter is only concerned with I/O, signal processing, and life support. It's one of the reasons I have a lot of hope that cryonics is feasible -- massive loss of brain tissue need not mean irreversible loss of an individual.


30% Minsky does not imply that there isn't a fully functioning intelligence, only that a mere 30% of Minsky is part of them.


I'm aware; that was what I was referencing in the second to last sentence of my post, about it applying far too rarely but applying in this case.


I keep thinking about an article I read a couple of years ago about the two schools of thought in AI: the now popular Conventional AI and (I believe) Computational AI. The article was mainly about the lead proponent of computational AI and how, after he helped give birth to the AI field, he was in effect being ostracised because he didn't think conventional AI should be considered "intelligence" as we generally think of it. I'm paraphrasing of course, but I'm wondering if Marvin Minsky was the subject of that article. It has been nagging me for a couple of months now and I just don't remember where I read it or the name of the subject.


What you're asking makes me think of Dijkstra's quote: "The question of whether machines can think is about as relevant as the question of whether submarines can swim."




That seems to be the one thanks! I think that the article had some "code samples" and this one doesn't... but I may be confusing it with another article about quantum computing.


Interesting fact: Minsky is an Alcor member[1], so he's probably being cryopreserved right now. Though if he died from a cerebral hemorrhage, I'm not sure how well they'll be able to preserve his brain.

1. https://en.wikipedia.org/wiki/Alcor_Life_Extension_Foundatio...


One can hope that they'll make the attempt regardless. Alcor's position is to carry out their directive from a member regardless of third-party opinions on viability where they can, as having the reputation for doing this minimizes the very real problem of interference from family members (for reasons economic, religious, etc.). Also, it is very hard to say at the time (as time is critical) how much damage is done via fatal brain injury of this nature, and of course at this point it's next to impossible to say what that will do to the odds and difficulty of future restoration.

Brain injury kept Roy Walford from being cryopreserved, though there it was clearly an extension of his own thoughts on the matter: http://www.cryonet.org/cgi-bin/dsp.cgi?msg=24045 I see that as a terrible shame; it is guessing in advance as to the limits of what can be restored.


Agreed. Not trying guarantees failure; trying leaves some possibility of success.


Agreed...given the resources and money there's nothing wrong with hedging your bets...


Was his cerebral hemorrhage a sudden/unpredictable death? If not, I'm surprised he didn't first relocate to cooperative hospice care in Scottsdale near Alcor to cryopreserve himself before damage became (possibly) irreversible.


And spend his last days away from his friends, family, and colleagues, away from his wonderful house and his piano? No, I think that even with foreknowledge he would have chosen to die in Boston.


And ruin his obviously intentional cryopreservation plans? No, I think that even with foreknowledge he would have chosen to do exactly what he chose to do and be cryopreserved.


Under "Policies and Procedures" it says that he's not only a member, but on the Scientific Advisory Board.


Who knew there was "science" involved in those things.

I mean, apart from the technical expertise on freezing etc., the rest is mostly wishful thinking: that such a facility will survive long enough intact (and that the US won't itself go the way of the Assyrian, the Persian, the Roman, the British and other empires, in a century or two), that technical/medical resurrection will be possible, and that future people will care to resurrect those in there.

Even if a great current mind were preserved there, if people of the future are, say, 2x brighter than us (not to mention having access to advanced AI), it would make little sense to want to resurrect them for that alone. And as for having access to 20th-21st century info: with our trillions of bytes of video, images, text and sound recorded every day, they'll likely want LESS, not more, information about our times.


Ironic that he died from cerebral hemorrhage!

EDIT: He was my first exposure to the science of mind and thought, and I find it ironic that his own death was due to a failure of the brain/mind, given how much he contributed to ideas of thought and mind in his career.


I audited a class on policy at the Harvard Kennedy School in October of 2015, and instead of going to class one day, we listened to a speaker (Jaron Lanier). After the talk, I stuck around in the front row and eavesdropped on people asking him questions. Eventually, one person came up to his side and asked "are you going to Marvin's house tonight?" I thought this person may have been talking about Minsky! So after Jaron responded with a "maybe" I approached this man and asked him if he was. And he did mean Minsky!!

This man and I started talking about intelligence, ML vs. symbolic, and more... he truly knew many intricacies of AI! Eventually, for some amazing reason, out of nowhere he asked me if I wanted to come to Marvin's house that evening! Of course I said yes! At the time, the only paper I had on me was ironically Patrick Winston's thesis printed out in my backpack, so this man wrote the name "Henry Lieberman" (a colleague of Minsky's) on the cover and gave me Minsky's address!

I went to Marvin's house that evening, and it was simply wonderful! We talked about SoM, and I was included in these discussions and was treated like a colleague. Marvin answered all my initial questions, but only created more within me! He engaged me! I really felt included. It was one of luckiest days in my life.

I'm sharing because I'm reading other stories about people's encounters with Marvin, and while I was reading them I didn't feel as sad. Perhaps mine might do the same for someone somewhere.


One Amazon review for The Society of Mind says "The book has nothing to do with artificial intelligence. It has everything to do with speculation. And since no one has built the society of mind from this blueprint in over 15 years, what does that tell us about the usefulness of the idea??" A "Marvin Minsky" replied: "It tells us that most AI researchers are still looking for oversimplistic solutions to problems. This reviewer does not understand that new ideas often take 2 decades to spread, because most practitioners in most fields don't often make changes in their careers. The ideas in my 1961 "Steps toward Artificial Intelligence" became general practice around 1980, and those in "The Society of Mind" are only now (2007) becoming widely adopted."



While I was a student at MIT, I heard a rumor that Prof. Minsky in the 1960's thought computer vision was such an easy problem that he assigned it to a first year undergrad.

I asked him if it was true, and he said that it wasn't true that he thought the problem was easy, but it is true that he had a first year undergrad student that he decided to put in charge of his grad students working on the problem. The first year student was Gerald Sussman.


Minsky helped design one of the coolest musical gadgets I've ever come across, the Triadex Muse. Being a sort of self-generative music box, Minsky imagined a future where families would gather around such musical machines instead of turning to boring old television for their entertainment and relaxation.

https://en.wikipedia.org/wiki/Triadex_Muse


RIP. I will always hold him as an inspiration.

Of interest: https://en.wikipedia.org/wiki/Neats_vs._scruffies

I find it interesting because Minsky did a lot of the foundational work in Neural Network research yet he philosophically identified as the opposite on the Neat/Scruffy spectrum of most NN researchers today. Much like Bayes, I think there is some immense wisdom from his research that will not even be acknowledged as wisdom for decades.


I reckon Scruffy is good for inventing a new field, Neat is good for pushing it to the limit.


I think that's interesting as well. I would have guessed he was a Neat from his talks and his admiration of AGI because it implies he agrees there will have to be a succinct theory of general intelligence/consciousness.


It depends a lot on where you put the limit between neats and scruffies. I personally would have put him straight in the center, as more of a pragmatist. One of the big criticisms he had (from my memory of the Society of Mind lectures) is that some researchers spent too much time trying to find an overarching, simplistic theory of mind (some sort of physics-envy), which would put him in the scruffy camp. I do also remember him saying that trying to replicate the brain was pointless and that people should focus on trying to replicate its function rather than its architecture, which I am assuming is a criticism of the connectionist approaches, which would put him in the neat camp.


I would probably guess that he believed in an eventual theory of general intelligence/consciousness, but probably not that it will be easy/succinct/simple. For example, Neural Networks have made extreme breakthroughs in sensory perception and classification, partly through modeling how humans perceive and classify. But do those theories extend over to other areas of intelligence like Planning? Creative capabilities? Emotion? Empathy? Especially if we know that there are other players (such as hormones, genetics, epigenetics, culture, etc.) beyond Neurons in those areas? The fact of the matter is that there are countless factors in how intelligence develops, let alone influence the development of all of the auxiliary functions for intelligence (such as memory, reflexes, sensory, etc.).

EDIT: You can get a little intro to his thoughts on the matter starting about 27:16 in this video [1] (linked at the time marker). If you watch for about 10 minutes, he demonstrates some of the difficulties of using single abstractions for something as complex as human intelligence.

[1] https://youtu.be/-pb3z2w9gDg?t=1636


Besides his contributions to computer science, his invention of the confocal microscope profoundly affected biology research and is still in wide use.


Wow, I didn't know that he invented confocal!


I got to spend some time talking with him once when we were both visiting the offices of OLPC in Cambridge, more or less across the street from MIT's building 32 where CSAIL is located. I told him a little about what I was working on (learning agents attached to dialogue systems) and he had some insights. I ran out to the coop to buy another copy of The Society of Mind and he signed it. We also talked a little bit about some of the ideas in his The Emotion Machine, which I hadn't read at that point.

On top of his work, Minsky taught me that you should meet your idols. If they're worth it you walk away enriched and invigorated.


Oh man... This is really sad news. I mean, don't get me wrong, ANY death is sad news, especially for that person's friends and family. But while I never knew Marvin Minsky personally, I've felt his influence on my life for a long time. AI has always been one of my favorite subjects, and he's one of the forefathers of AI research and his presence looms large in the life of anyone connected to the field. So this feels like losing an old friend.

Not to mention that he was a brilliant mind, and his loss is a loss for humanity at large.

Anyway, RIP Mr. Minsky.


I think Minsky was the last living giant of AI that attended the 1956 Dartmouth Summer Research Project on Artificial Intelligence, which many cite as being the beginning of NLP, computer vision, machine learning, etc.

Sad news...


More than an attendee, he was one of the proposers.

The proposal:

http://www-formal.stanford.edu/jmc/history/dartmouth/dartmou...


Trenchard More of Dartmouth was also a participant and is, as far as I know, still alive



Favorite paper of his:

Why Programming is a Good Medium for Expressing Poorly Understood and Sloppily-Formulated Ideas

http://web.media.mit.edu/~minsky/papers/Why%20programming%20...


What a sad day, he was truly one of the great minds of the twentieth century. Inspiration to generations.

P.S. Web of Stories has an extensive, autobiography style interview with Marvin Minsky [1].

[1] http://www.webofstories.com/play/marvin.minsky/1


Some great sentences from Minsky:

No computer has ever been designed that is ever aware of what it's doing; but most of the time, we aren't either.

In general we are least aware of what our minds do best.


As a grad student, I remember shooting the bull with Minsky and Hillis (1983). The CIA had offices across the hall and we were discussing how we might grab one of their bags of shredded documents and use a computer to piece together all the fragments by hashing the pattern of edge fragments on each piece.


Which later became a real-life DARPA Challenge (solved).

http://archive.darpa.mil/shredderchallenge/

And, come to think of it, a variation was in the novel Rainbows End by Vernor Vinge.


There's an excellent chapter about it in http://www.glassner.com/portfolio/morphs-mallards-and-montag...


> Underlying our approach to this subject is our conviction that "computer science" is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology -- the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects. Mathematics provides a framework for dealing precisely with notions of "what is." Computation provides a framework for dealing precisely with notions of "how to."

http://web.media.mit.edu/~minsky/papers/Why%20programming%20...

https://mitpress.mit.edu/sicp/front/node3.html

This is from the SICP preface and is the quote that actually matters; it was inspired by Minsky. I hate that the quotes that went public were all the other, less inspired ones from SICP.


Glad I was able to get to meet Marvin back in 2014. On cognitive neuroscience he was pessimistic, he likened it to telling a chemist to try and discern what a computer was doing by looking at the machine without the monitor. Really enjoyed that analogy.


He was truly a brilliant and humble man, who wrote so much influential and interesting stuff! Here's one of my favorite papers by Marvin Minsky:

Jokes and their Relation to the Cognitive Unconscious

Marvin Minsky, MIT

Abstract: Freud's theory of jokes explains how they overcome the mental "censors" that make it hard for us to think "forbidden" thoughts. But his theory did not work so well for humorous nonsense as for other comical subjects. In this essay I argue that the different forms of humor can be seen as much more similar, once we recognize the importance of knowledge about knowledge and, particularly, aspects of thinking concerned with recognizing and suppressing bugs -- ineffective or destructive thought processes. When seen in this light, much humor that at first seems pointless, or mysterious, becomes more understandable.

http://web.media.mit.edu/~minsky/papers/jokes.cognitive.txt


Reading the comments here, I would have loved to meet the guy. I'm in my 30s and I've met and spoken to precious few of "the greats" of our generation. I feel like there are three phenomena going on:

1) We don't always know who is one of "the greats" of today's generation, until much later.

2) Today, content generation is so democratized (e.g. Wikipedia, SoundCloud) that there are fewer individual superstars like Pushkin, etc.

3) Even intellectuals who were once highly regarded would today have tons of comments on their blogs nitpicking and debating every detail of what they said, with many of the comments being low quality. See e.g. the Sam Harris vs. Noam Chomsky debates.

It seems like a world in which it's harder to be great like Alan Turing or Claude Shannon or Richard Feynman or Marvin Minsky. At the same time, maybe there are more great people than ever, and we just don't always know who they are until much later.


One of my earliest exposures to CS was his Computation: Finite and Infinite Machines.

"Communication with Alien Intelligence" is another favorite of mine. The idea of enumerating all possible Turing Machines and looking for ones that do something meaningful is brilliant.


My father had a book by Minsky which I swiped from his bookshelf and read cover to cover. I can't for the life of me remember the title but looking over his bibliography I think it was this one. This book, if indeed it was this one, had a big impact on my younger self, though I'm ashamed of two things now: 1) I can't remember exactly what it was about it that affected me so, and 2) I don't know where the book is! I do remember the chapters about McCulloch-Pitts artificial neurons and Turing machines and the halting problem. I'd dearly love to find a copy of that book to see if my older self recognises what my younger self saw in it.


You can see the ToC here: https://dl.acm.org/citation.cfm?id=1095587

Chapter 3 talks about McCulloch-Pitts.

I remember being impressed with chapter 14 ("very simple bases for computability") as a kid. Finding UTMs with minimal number of states etc. are great riddles. I also fondly remember the discussion of the halting problem and related problems ("does program P output X") in chapter 8. This was my first introduction to this procedure and Minsky made the idea of reducing one problem to another totally straightforward. Many years later I realized that not a few CS students find these ideas confusing.


Neat-o. My searching didn't turn up that link for some reason. Thanking you kindly.


It is too bad that his book Computation: Finite and Infinite Machines is out of print.


Very true, but FWIW, it's not too hard to find a copy, whether it's a used dead-tree copy (18 or more available on Amazon.com right now), or a bootleg PDF. The PDF version is on the popular e-book sharing sites.


Is the set of all Turing machines finite?


No, but they can be put in a 1-1 mapping with the integers, making them countably infinite.


No, but it only needs to be countable, by my understanding.
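One standard construction (not from the thread, just a common trick) makes that countability concrete: encode each machine's description as a finite binary string, and order the strings shortlex ("", "0", "1", "00", "01", ...), which pairs them one-to-one with the non-negative integers.

```python
def string_to_index(s):
    """Shortlex bijection from binary strings to non-negative integers."""
    # "" -> 0, "0" -> 1, "1" -> 2, "00" -> 3, "01" -> 4, ...
    return (1 << len(s)) - 1 + (int(s, 2) if s else 0)

def index_to_string(n):
    """Inverse of string_to_index."""
    length = (n + 1).bit_length() - 1
    if length == 0:
        return ""
    return format(n + 1 - (1 << length), "b").zfill(length)

# Round trip: every index names exactly one string and vice versa.
assert all(string_to_index(index_to_string(n)) == n for n in range(200))
```

Since only some strings decode to valid machine descriptions, the set of Turing machines injects into a countably infinite set, and it is clearly infinite itself (one machine per number of states, say), so it is countably infinite.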


I would go to Marvin's lectures - they were in the evening, 7 PM or so. This was the early 2000s, and I was a grad student at MIT.

I went to Patrick Winston and asked him if it was worth going to Marvin's lectures given that I kept falling asleep. He said: of course, we all know you are overworked, but Marvin may say something that will change your life.


"You don't understand anything until you learn it more than one way." ... RIP ... The Society of Mind one of the best books.


After reading that book my imagination came up with a cartoon I doodled in my notebook. It was someone speaking at Minsky's funeral and saying "I know he thought my mind was just a bunch of simple machines like Builder, but he was still a pretty good guy."

Not that I knew anything about him... it was just an impression.


I remember reading The Society of Mind in high school and being amazed by the ideas after reading his novel The Turing Option. I haven't thought about either in years. Looking back, maybe it's no surprise that I work in machine learning now.


"If you like somebody's work just go and see them. However, don't ask for their autograph. A lot of people came and asked me for my autograph and it's creepy." -- Marvin Minsky (https://youtu.be/qJZ_1a-t_sA?t=1543)


Very sad and surreal, I happened to just be reading his Wikipedia page yesterday. I hope the cause of death does not prevent his cryopreservation. Say what you will about cryonics, but it definitely gives you a better chance of living again than interment or cremation. I hope the world will see his genius again.


K-Lines (Knowledge-lines). I still meditate on this idea, and Minsky's paper "K-lines: A Theory of Memory".

https://en.wikipedia.org/wiki/K-line_(artificial_intelligenc...


Minsky used to give talks at the annual Boskone Science Fiction Convention. I heard several and enjoyed them greatly, he was an entertaining speaker.


He also spoke at the World Science Fiction Convention in Boston in 1989. The only time I got to see him speak in person.


When I first came into contact with AI at university, I really enjoyed reading his essays. As an undergraduate I wasn't really ready for his papers, but publications like "Why People Think Computers Can't" and "Matter, Mind and Models" (which were recommendations of my professor) really got me interested in the field.

Rest in peace, Mr. Minsky.


He had a wonderful way of expressing ideas. This always resonated with me: "Minds are what brains do"


Danny Hillis introduced Marvin at the MIT Media Lab's 30th anniversary in October 2015, where they both participated in a remarkable panel discussion.

Though not as strong and fast-talking as he once was, Marvin's humor and wisdom shine through.

Here's a link to the video: http://www.media.mit.edu/video/view/ml30-2015-10-30-01

Danny begins his introduction at about 41:29.



Here's a video still, from the POV of a robotic Dakin Bear, of Marvin Minsky's son, Henry Minsky, who had a look of trepidation at the idea of sacrificing his Dakin Bear to one of his dad's robotics experiments.

http://imgur.com/gcFVzpk


Thankfully, MIT OpenCourseWare recorded one of his classes, The Society of Mind (6.868): https://www.youtube.com/watch?v=-pb3z2w9gDg


@seldo (CTO of npmjs)

> True story: in my final essay of my AI module at uni, I disagreed with everything Minsky believed, so my professor, a fan, flunked me.


So sad. I wanted him to survive until a true AI breakthrough happens, which seems so close (granted, it has seemed that way for many decades now, but that's exactly why).


I'm not an AI guy, but the deep learning that self-driving cars are doing seems like something he might have liked. In my book that neural network stuff is AI.


Machine learning is not Artificial Intelligence.

Machine Learning is applied science. Artificial Intelligence is science.


This is sad beyond words. He was an admirable genius and a very candid person. May he rest in peace.


I'm trying to figure out who will play him in the Hollywood movie about him.


Admin, can we get a black bar for this? Marvin Minsky is widely referred to as the "Founding Father of AI."


I can't imagine who deserves a black bar if Minsky doesn't, unless a grey bar would be more appropriate (Minsky was signed up for cryonics with Alcor, so he's not dead dead.)


It seems as though the cause of death in his case might present something of an obstacle to whatever is supposed to happen after the cryonics step.


When he's uploaded, who owns the copyright of his mind? Should it be considered a derivative work of his education?


Uh, Minsky does?


This debate will be an interesting one. I imagine life insurance companies demanding their money back if the once deceased person is alive again. Also, how would inheritance work if heirs can spawn multiple instances of themselves that may not act as a single collective (and inheriting copyright may fit here).

It'll be fun.


> I can't imagine who deserves a black bar if Minsky doesn't

Could you elaborate on why he deserved this? Did he build the first neural network, a Lisp machine, or the ALICE chatbot, or break ImageNet? I always considered him a kind of celebrity scientist, while other people, whose names nobody remembers, actually pushed the AI movement forward by doing real things: working on Google Brain, Watson, Cyc, trying to catch spam, terrorists, and fraudsters.




Ignorance is not an excuse


Could you elaborate?


You can change the topcolor in the settings to black to get your own personal black bar.

You won't see the top menu links.


You're completely missing the point of the black bar: it's a way for the community to mourn the loss of a fellow hacker.


It took me but a moment to realize why the black bar appeared on a refresh. Thanks mods.


Did we even get a black bar when Steve Jobs died?

Seems like the admins have pretty much done away with the black bar thing.



Black bar is an empty gesture. Who does it serve? He died, his ideas didn't.


Well humans mourn their dead, but also there is a more practical reason if you'd like:

When I see a black bar on Hacker News, it serves as an alert to me to scan the news more carefully. Although I knew who McCarthy was, Jobs, etc., I didn't know this person. The black bar alerted me that someone considered quite an influence in this community had died. Now I'm reading all about the person in various articles. The black bar helped alert me to that.


Isaac Asimov: "The only people I ever met whose intellects surpassed my own were Carl Sagan and Marvin Minsky."


Minsky's Society of Mind lectures are online, thankfully - https://www.youtube.com/watch?v=-pb3z2w9gDg .

In one of the lectures, Minsky told a story about how when he was at Princeton, Oppenheimer invited him to lunch. When Oppenheimer brought Minsky to the lunch, there were two other people there - Gödel and Einstein. Talk about brainpower, that must have been an interesting table conversation.


They probably just talked about how the human brain would never be capable of understanding some true things about physics.


That's high praise, although I guess he never met von Neumann.


Pretty egotistical of Asimov to frame it that way.


There are a bunch of adjectives I would use to describe Asimov, and egotistical is so far down the list I'd get bored listing them before I got to it. It seems likely to be a realistic and honest self-assessment, on the basis of the breadth, depth, and foresight of Asimov's written work. If you're Usain Bolt, is it egotistical to say, "I'm probably the fastest runner in the world"? Why is intellect so much different? Sure, there's no precise measure of intelligence... but someone with great intellectual achievement gets to, at the very least, lay claim to great intellect.

It would be easily dismissed if said by someone much less accomplished. In this case, I just take it as a frank statement of belief.

Regardless, I think it's safe to say Minsky was a great intellect, of a caliber not often seen.


I don't think Bolt could say that. He'd be at the very top (for certain kinds of metrics), but to say #1? That presumes you know your rank compared to 8B+ people.

I would have preferred Asimov to say something like "Sagan and Minsky are the brightest minds I've encountered in my life." And if you think Asimov is super smart, then you'll respect Sagan and Minsky that much more. And you'll respect Asimov too for not being so egotistical. It doesn't even matter if it's actually true or not.

In my experience, Minsky demonstrated humility whenever I talked with or listened to him. For that, my admiration goes beyond the talent.


It's a quotation, taken out of context, without any additional side channel information. Might be a good idea to not form a strong opinion of a very smart man from this.


Especially given that Asimov said he stopped understanding calculus when he got stuck on the quotient rule (i.e., couldn't be bothered to learn the proof).


Have you read much of what Asimov wrote about himself?


You are not Isaac Asimov. Therefore you cannot know his mind. So maybe he wasn't boasting at all.


Isaac Asimov wasn't any of the people he was talking about either, so maybe he should've kept it shut about their minds, too.

But that wouldn't make good banter.


If that's true, then he needs to work on his language because it's easily interpreted that way. Either that or the GP needs to expand the quote.


Sure, but he was incredibly smart.


Being a writer doesn't say anything about his intellect; you didn't know the man, so he may well have been completely honest.


In the same vein -- John Nash once said that Minsky was the smartest person he ever met.

...He also said that he thought McCarthy was an idiot. :)


God, when I log on to HN in the morning and see that damned black bar I know it's going to hurt.

But Marvin Minsky? My Tuesday wasn't ready for this. He has had such an impact on the field of AI, and even on the social dialogues about it. Not everybody thinks that robots are going to go Skynet on us, and many of us who realize that were informed by his work. Whether directly or indirectly, so much of his work has become common knowledge amongst AI enthusiasts and scientists.

I'd be wasting my breath to say that he'll be missed, of course. I wish I could have met him.


We just lost a giant.

Hard to imagine someone more black bar worthy for Hacker News, hope we have one up soon.


I hope not. The black bar has become a meaningless exercise in political posturing.

Edit: allow me to clarify - just having to choose whether or not someone is "black bar worthy" is distasteful. Ian Murdock didn't get a black bar that I can recall, and I don't remember seeing one since then, even though other people relevant to the community have died. Are we to expect someone to change the color of the bar every time a death in the tech community occurs, and how do we judge relevance in that regard?

No, it's an arbitrary gesture that doesn't really honor anything or anyone. It just gives people something to argue about. Why did someone get a black bar and someone else didn't? Why was this person deserving of it, and not that person? It's best to just remove it altogether.


I thought the black bar was done by the people who run the site, and "we" don't judge it at all.

In which case, it's a personal decision to honor someone they care about by placing a subtle notice on their site, which happens to be popular. It's no different in spirit to the thousands of tribute blog posts being written as I type this.

I find it extremely distasteful to criticize how someone chooses to honor the dead. As long as they're not doing it by shooting guns into the air or hosting a destructive party next to your house or something like that, what do you care?


"We" certainly do judge, even if that judgement has no practical value. People do act as if it's up for debate whenever we decide one death is worthy of it and ignore others, or get into petty arguments if it doesn't appear fast enough, or for the right person. An expectation builds which, if not properly catered to, leads to insult and animosity. And given that it's Hacker News, sometimes suspicion and paranoia.

It certainly is within the site owners' right to do whatever they like, but for the community it's becoming a spectacle.

Although obviously, as the bar is up right now, my opinion on the matter isn't going to prevail.


I'd say the solution here is not to end the black bars, but to end the petty disputes by being more civilized in the discussions. Perhaps that's too much to ask, but we should give it a shot.


Thank you.


Why?


Because it very strongly reflects the guiding idea of what is best for HN given that 'petty disputes' are the opposite. They are destructive to a degree that is hard to imagine until you put a figure on it.


Unfortunately, to my eyes the comment that dang thanked looked like a rather passive-aggressive put-down. Especially in the context that the OP was not being petty but was raising an important point in a measured way.

EDIT: Civility is important, but it should not be confused with everyone having the same point of view on a topic. And censorship by "civility" causes people to not join discussions. When I come to HN I want an intelligent but lively discussion. However, some issues have reached such a consensus that even well-thought-out opposing views, put politely, get heavily downvoted.


Think about it this way...

Imagine you were the caretaker of a public establishment like a school or business, and news broke of the passing of an eminent person who was deeply respected and admired by many of the people who frequent that establishment.

So you went and lowered the flag to half-mast, because that is the customary and respectful thing to do. And whilst most people appreciated the gesture and felt comforted by the shared sense of mourning and respect for the deceased person, a small minority erupted into a noisy debate about how appropriate it was to lower the flag, and whether someone else was more worthy of having the flag lowered in their honour, etc.

If you can imagine this scenario in real life, you can understand how dang feels when this kind of argument erupts on a bereavement thread on a site he runs and cares so deeply about cultivating as a pleasant site to visit.

He can't be the one to call people out for being insensitive, but he can at least say "Thank you" to someone who does, and who in doing so, gives him some much-needed reassurance about the level of emotional intelligence around this place.

Discussions about the merits of customs and policies on the site are fair enough, but if we're to be as humane and compassionate online as we would try to be offline, the time and place of the mourning and honouring of a just-deceased person is not the right time and place.


I can see this from both POV...

Just reverse that situation and imagine that you are a member of a public establishment and that when certain people you particularly respect pass that public establishment does not follow its usual customs.


If/when it happens, you'd politely discuss it with the caretaker and others in the establishment (just like what happens here when someone says "hey can we have the black bar?"), and you'd then respect the consensus outcome. And if the caretaker and the other members rebuked such a request that was important to you, it would be fair enough to express your feelings then and there, and discuss it to the point of resolution.

That isn't what's happened here.

Seriously, civil behaviour around a bereavement is just not that hard.

Respectfully, I won't be commenting further on this thread.


On reflection, I think mikeash is correct. The civil thing to do on my part would have been to not bring the subject up, as there's a time and place for everything, and this thread was probably neither. I accidentally created exactly the sort of noise I wanted to avoid. It would be better to expect more of people.


Thank you as well.


It's not a passive aggressive put down, it's just regular aggression with sarcasm. I straight up called it "extremely distasteful" in the prior comment.

You don't have to have the same point of view on this topic, just don't take a discussion of a recently dead person being officially mourned by the site owner as an opportunity to criticize that.

If this "censorship" (which is far from it) causes people who are going to argue over mourning to not join the discussion, then mission accomplished.


The user dang is the moderator. A charitable reading of his comment would be that dang was relieved that someone else was responding to the subthread in a way consistent with his sense of how people should act on Hacker News.

Meta-discussion of the black bar is intellectually uninteresting at best. Off topic in the middle. And disrespectful of people's grief at worst. The least of these is reason enough to downvote.


I know very well that dang is a moderator. I had expected that a moderator would actually consider the site's own guidelines when someone displays what they describe in their own words as "regular aggression with sarcasm"+.

+ https://news.ycombinator.com/item?id=10973857


My comment was posted before mikeash described his comment that way, so unless you think we should be mind-readers, that's a hindsight fallacy. I never dreamt he was being sarcastic. In fact I still can't read his comment that way—to me it remains a striking expression of the right values of this site.

If I squint, I guess I can see that use of the word "civilized" as a bit aggressive, but really I think Mike was being hyperbolic for effect in his later description.


I don't read it that way.


Out of curiosity, does anyone have a list of people who HN has enabled the black bar for?


I went ahead and compiled a list of "black bar" memorials from what I could remember and a little bit of searching.

(It's probably not comprehensive, but it's clearly composed of computing luminaries and members of the YC community)

Dan Haubert (1984 - 2009)

Robert Morris (1932 - 2011)

John McCarthy (1927 - 2011)

Dennis Ritchie (1941 - 2011)

Steve Jobs (1955 - 2011)

Doug Engelbart (1925 - 2013)

Aaron Swartz (1986 - 2013)

Gene Amdahl (1922 - 2015)

Marvin Minsky (1927 - 2016)


Just want to speak up and say that your comment is actually very sensible, and it's terrifying to see it's downvoted.


Can we get a black bar for this one?



