Negative Expertise (1994) (media.mit.edu)
72 points by the-mitr on May 3, 2017 | 9 comments



(1994) Minsky essay.

> Could it be that our accumulations of counterexamples are larger and more powerful than our collections of instances and examples? Could it be that we learn more from negative rather than from positive reinforcement?

It seems like major scientific breakthroughs involve both positive (oh, so THAT'S how to do it) and negative (well, we tried a, b, and c, they all failed or had bad consequences, so let's not do that again) knowledge.

I think the essay sets up a false dichotomy, or a false question, of "which is more important?"

It might be that everyday life depends more on negative expertise than positive expertise. You don't have to be at the top of your field or know things nobody else knows, but if you make a major mistake you might get hurt, become a social outcast, etc. But humanity's progress seems to depend on both positive and negative expertise, inextricably intertwined.


> I think the essay sets up a false dichotomy, or a false question, of "which is more important?"

He sets the ground right up front: expert systems (the state of the art at the time, though by then in decline) primarily consist(ed) of positive knowledge, and that was a mistake. Essentially the dichotomy had already been implicitly resolved in favor of the "positive is more important" side, and Marvin was arguing against that.
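
To make that concrete, here's a toy sketch (the situations, rules, and names are all invented, not from any real expert-system shell) of the difference between positive rules and Minsky-style censors:

    # Toy contrast: classic rule bases mostly encoded positive knowledge
    # ("in situation S, do A"); Minsky argues negative knowledge
    # ("in situation S, never do B") deserves equal standing.

    positive_rules = {
        "low_fuel": "find a gas station",
    }

    censors = {
        "engine_overheating": {"keep driving", "open the radiator cap"},
    }

    def advise(situation, candidate_actions):
        if situation in positive_rules:            # positive knowledge wins if present
            return positive_rules[situation]
        forbidden = censors.get(situation, set())  # otherwise veto known-bad moves
        allowed = [a for a in candidate_actions if a not in forbidden]
        return allowed[0] if allowed else "no safe action known"

    print(advise("engine_overheating", ["keep driving", "pull over"]))  # -> pull over

Even with no positive rule for the overheating case, the censor alone steers you away from the bad move.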


It really depends on the dimensionality of the solution space. With very high dimensionality, going "away" from a bad direction basically does not help.
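
A quick numerical sketch of that claim (dimensions and sample count picked arbitrarily): a random unit vector is nearly orthogonal to any fixed "bad" direction once the dimensionality is large, so ruling that direction out barely narrows the search.

    # Expected |cosine| between a random unit vector and a fixed direction
    # shrinks like 1/sqrt(d), so "not that direction" excludes almost nothing.
    import numpy as np

    rng = np.random.default_rng(0)
    for d in (2, 10, 100, 10_000):
        bad = np.zeros(d); bad[0] = 1.0                # the one known-bad direction
        v = rng.standard_normal((1000, d))
        v /= np.linalg.norm(v, axis=1, keepdims=True)  # random unit directions
        print(d, np.abs(v @ bad).mean())               # roughly sqrt(2/(pi*d))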


An alternative to "both positive and negative are important" is that in some sorts of situations positive is more important and in other sorts negative is.

Not saying this is the case, just mentioning that as a possibility.


> Other colleagues maintain that we should be able to construct large, uniform neural networks that can learn to do all that minds might need. I do not see much hope of this, because of fear that any very large such network would be prone to accumulate too many interconnections and become paralyzed by oscillations or instabilities. How could we stabilize such systems? My answer is that one might have to provide a variety of alternative sub-systems, decoupled enough that if each part should fail from time to time, the rest could continue to function so that not all the system will all fail at once. This means that those parts must be suitably insulated from one another.

Looks like Generative Adversarial Networks?
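
Whatever he had in mind, the insulation idea in the quote is easy to sketch (everything here is invented for illustration): run each sub-system behind a boundary so one part failing doesn't stop the rest.

    # Decoupled sub-systems: each part may "fail from time to time",
    # but the insulation boundary keeps the whole from failing at once.
    import random

    def subsystem(name):
        if random.random() < 0.3:        # simulated intermittent failure
            raise RuntimeError(f"{name} oscillating")
        return f"{name}: ok"

    def run_insulated(names):
        results = []
        for name in names:
            try:                          # the insulation boundary
                results.append(subsystem(name))
            except RuntimeError as e:
                results.append(f"{name}: failed ({e}); the rest continue")
        return results

    print(run_insulated(["vision", "language", "planning"]))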


I think he's referring to the Society of Mind.


I happen to have a copy of The Emotion Machine on my desk right this instant, so I looked: this topic is discussed in that book (his last).


> But a 'negative' way to seem competent is, simply, never to make mistakes. How much of what we learn to do -- and learn to think -- is of this other variety?

It is an interesting question, but I feel like the words "competent" and "expertise" set up misleading expectations. Most people are experts at walking and eating, but not at nuclear physics or the history of Asian music.

I guess almost all of what we do at any given moment - look, talk, eat, sleep, walk, read, etc. - is of this type. We abstract everything and don't consciously process most of what we see & do. Lots of people have speculated that if we were consciously thinking about every sensory input and every action we took, living would be unbearable.

But for learning new things, for establishing what it means to be an expert in a new field -- by being better than others, by doing research, by breaking new ground -- it has to be asked: what good does accumulated knowledge without mistakes serve? What is ever learned if no one makes a mistake? Science doesn't move without mistakes. Nor does evolution. With mere competence, there's no change.

Maybe I'm just coming to the conclusion that being competent and becoming an expert are two completely different things?

> Presumably, experts have more effective censors than the rest of us

I'm honestly not sure why this is presumed. A lot of us would recognize that someone who's made more mistakes and learned from them is more expert than someone who's tried nothing new and has relied only on book knowledge of others' mistakes.

> It annoys me how frequently people suggest that the 'secret' of making creative machines might lie in providing some sort of random or chaotic kind of search generator. Nonsense!

I couldn't agree more with this; it hits home, having spent time practicing digital art. So many people talk about adding a random number generator as if it's going to find new things and add some magical discovery to the process. It doesn't, and there are almost always better alternatives.


One challenge with encoding negatives is that there are infinitely many of them. When confronted with a lion, positive things to do would all involve defense or escape. Things not to do include taking a nap, having lunch, provoking the lion, pulling out your phone to read HN... We can sift through those and say "provoking the lion" will likely be very bad and avoid that. All the others might get lumped into "do nothing" because they don't really address the lion. So I suppose I've answered myself - the negatives must be restricted by potential relevance to the situation in order to limit their number.
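
A sketch of that conclusion (the situation and actions are invented): keep the negatives indexed by situation and consult them only against candidates a positive generator already proposed, so the infinitely many other non-actions never come up.

    # Censors are only checked against situation-relevant candidates,
    # never enumerated over the infinite space of things-not-to-do.

    def relevant_actions(situation):
        # positive generator: proposes only candidates relevant to the situation
        if situation == "lion":
            return ["run away", "back off slowly", "provoke the lion"]
        return ["do nothing"]

    censors = {("lion", "provoke the lion"): "likely very bad"}

    def choose(situation):
        candidates = relevant_actions(situation)
        safe = [a for a in candidates if (situation, a) not in censors]
        return safe[0] if safe else "no safe option known"

    print(choose("lion"))  # -> run away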




