
This seems to be a common problem. Watzlawick describes an experiment in which two people (A and B) are given the task of classifying tissue samples as "healthy" or "sick".

There is an initial training session in which a light tells them whether their answer was correct, with one little caveat: only A gets real feedback. B's light just duplicates A's, so the feedback B receives is essentially random (I think they are also shown different pictures).

Since the task is deliberately not very difficult, the As learn to classify correctly in about 80% of cases. The Bs face a much harder task: they have to find order in what is, for them, a random world. They form very complex theories to account for it.
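To see why the Bs end up theorizing about noise, here is a minimal simulation sketch of the yoked-feedback setup (my own illustration, not from the book; the 80% figure comes from the comment above, and modeling B as guessing at chance is an assumption):

    import random

    random.seed(0)
    trials = 1000
    agree = 0  # how often B's light matches B's actual correctness
    for _ in range(trials):
        truth = random.random() < 0.5                              # True = "sick"
        a_answer = truth if random.random() < 0.8 else not truth   # A is ~80% right
        b_answer = random.random() < 0.5                           # B can only guess
        a_light = (a_answer == truth)                              # A's light is genuine feedback
        b_light = a_light                                          # B's light merely copies A's
        agree += (b_light == (b_answer == truth))
    print(agree / trials)  # hovers around 0.5

The printed agreement hovers around 0.5: B's light tracks A's performance, not B's answers, so no theory B forms can ever be confirmed or refuted by it.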

But that's not quite the experiment yet. The real experiment is that the As and Bs are then brought together to discuss their results. What happens is stunning: instead of rejecting the Bs' theories as needlessly complex, the As are usually so impressed by their subtle complexity and detailed brilliance that they change their minds and accept the Bs' theories!

When asked who will improve in the next round, all the Bs and most of the As pick the Bs. And they are right, because the As, having adopted at least some of the Bs' ideas, now perform worse.

Reference: Watzlawick, "Wie wirklich ist die Wirklichkeit?" (German): http://omg.pytalhost.net/dls/ebk_wwidw.pdf



