Reading books is awesome because you get the condensed knowledge that someone may have spent decades of their life compiling. When I listen to a podcast, the same applies to a lesser extent. The problem is that Lex does not ask interesting questions. He dwells on philosophical and common questions, e.g. "What is the meaning of life?", "Is math invented or discovered?". These questions are interesting but are not in the guest's field of expertise, which in turn causes them to give a generic response. This devalues the quality of the podcast.
The other thing I like is how Lex is willing to disagree with the interviewee. It leads to much more interesting discussions when someone has to explain their reasoning rather than just stating it outright. I think it helps that Lex seems to always ask the same questions I was thinking of!
I understand it would be difficult to get one's head around the material produced by each of these giants in order to ask more specific questions. Would it be worth asking others in the field about the key ideas that require explaining and which topics are controversial?
My advice would be to ask shorter questions; sometimes, Lex, you try to explain what you mean. I would just drop the question and let the other person start talking instead of trying to fill the silence.
Also, I do really enjoy the questions about the meaning of life (maybe more discussions on free will would be great too).
In any case, hats off to you Lex for interviewing some of the most interesting people in the field (and those who are not exactly in the field too).
Also, I love how you show up in a suit.
Here are things I didn't realize were part of the interviewer's role (my role), but now know are:
1. Push towards depth, because not all people go there naturally themselves.
2. Ask for clarifications if I don't understand something. This can make me sound stupid, but it's a worthy sacrifice. I will always sacrifice ego for the chance to understand something basic or hopefully fundamental. In fact, I play dumb sometimes just to force explanation of basics on which the technically deep ideas are built.
3. Disagree respectfully (at times playing devil's advocate) to give a chance to the interviewee to argue their point.
4. Speak whatever question or point I have clearly, concisely, quickly, and then shut up and listen. My role is to give the other person a break and to throw out ideas that spark their passion. I really struggle with this (especially the concise, clear part).
Thank you again for the kind words. I'll keep improving!
One can tell that Lex does his research and also asks specific questions pertaining to the interviewee’s expertise. I think he strikes a good balance in many/most episodes.
Keep up the good work, Lex. The quantity of content you put out at the level of quality you do is very impressive.
The consistency of these experimental outcomes may help to explain his authoritative tone. After all, he won a Nobel Prize in economics, which is completely outside his domain expertise.
If that is not enough to entice you to give it another go, consider the book The Undoing Project, by Michael Lewis. In a highly researched account full of detailed anecdotes, Lewis describes how Kahneman and Tversky's collaboration initially began around understanding the decision-making process in circumstances where there is a high degree of uncertainty and acquiring additional information is not feasible. The Kahneman/Tversky collaboration is described as a deep one, in which neither man claims credit for his respective contributions. Only both minds together could have produced their research findings.
Lewis also describes how many reviewers of his book Moneyball noted the many parallels between Kahneman and Tversky's research and themes covered in Moneyball. Lewis was not familiar with this research when he wrote Moneyball. It was not until others pointed out these parallels that Lewis decided he should engage with Kahneman to write The Undoing Project.
You might prefer his scholarly writings over those intended for a general audience:
I read it on the advice of a good pal, and it sure seemed like bullshit to me.
(I could be wrong and would be happy to be proven so. Just not a fan of the all-or-nothing attitude people apply toward this book when that doesn't seem warranted.)
Besides, why go through all that trouble when there are heaps of good psychology books out there that aren't plagued with errors like this one? Since our time is so limited, I think it is a good heuristic to avoid any non-fiction book that is known to contain errors.
However, I don't think this justifies dismissing the entire book. "I'm not sure which studies weren't reproducible and I don't feel like looking them up," is a very different statement than, "This whole book is bullshit." There's really no reason to make that latter overstatement.
That's kind of the same problem that a psychology researcher faces; some of their data is going to be wrong.
The question winds up being how "robust" your claim is: can it survive having some points be wrong? For Thinking, Fast and Slow, the robustness of the claims is kind of a mixed bag imo.
All of psychology has suffered in the replication crisis, but my understanding is that Kahneman & Tversky's stuff is better than most. Their work was mostly solid, and from a different era. The real bullshit began in the era of celebrities doing TED talks.
Edit: it would be better for me to distinguish Kahneman & Tversky's own work from the work of others described in the book. E.g. there is stuff in the book on priming which is definitely TED-era and doesn't replicate.
If you go with the first interpretation, it's plausible and useful as one more datum. But it seems like replication problems make the hard-distinction approach more problematic.
I've yet to read the book, but I've listened to him on several podcasts, and I've never gotten the sense that he wouldn't see it as a continuum. Didn't he even say early in the interview that the "1 & 2" framing is more of a metaphor? (His answer to that question starts at 9:00.) System 1 is trainable, for example, and I can't imagine he'd suggest it isn't highly dimensional.
Most of the book is still considered correct.
Plus, he readily admitted to the faulty parts and made a very strong request to the affected research teams to clean up their act and pretty much re-do all experiments multiple times, by multiple labs, with external oversight.
Can you give an example of this? I haven't seen anything he said here that fundamentally needs to rest on a theory that could be the outcome of some study, but maybe I'm interpreting it differently.
Bear in mind: As has been repeatedly pointed out, it is only the priming-related chapter (called 'The Associative Machine' in the book) that put "too much faith in under-powered studies". Not the entire book!
The book is a synthesis of forty years of Kahneman's research and his collaboration with his late colleague, Tversky. A wide range of topics is covered, and it still absolutely merits reading. Patiently dive into the book and make up your mind.
(And yes, a v2 of this book definitely is worth it, given the "authority" of the Nobel Memorial Prize.)
> it is administered and referred to along with the Nobel Prizes by the Nobel Foundation
So, for all intents and purposes it kinda is "the Nobel Prize in Economics". If they don't give a fuck about legitimacy of it, I don't see why they would care for the rest of "Nobel Prizes".
* Summary of some criticism by researchers trying to replicate the original studies: https://www.buzzfeed.com/tomchivers/what-is-your-mindset?utm....
However, this BuzzFeed article has itself been critiqued for strawmanning "mindset theory" and not highlighting some of the meta-analyses that have given evidence to the theory of "growth mindset". Source: https://www.thecut.com/2017/01/mindset-theory-a-popular-idea...
* It's worth noting that Carol Dweck herself has commented on how she believes her research is being inaccurately applied in schools: https://www.edweek.org/ew/articles/2015/09/23/carol-dweck-re...
* Here are two (pay-walled) meta-analyses done on "growth mindset" research:
As a footnote, I will say two quick things. First, I've found the theory of "growth vs. fixed mindset" useful in helping me challenge myself in areas I may otherwise not have. Second, on the other hand, I have worked full-time in a high school as a CS teacher, and we discussed growth mindset quite often in that school. Many students became "immune" to the idea and rolled their eyes at it, to the point of making it a meme around the school.
"… as expected, average effects were small because many students are already doing well, do not have motivational issues, or are not in environments that encourage or support growth-mindset behaviors. When we take account of such factors, more noteworthy effects emerge. The improvements in the gateway outcome of 9th grade GPA were concentrated among adolescents who are at significant risk for compromised well-being and economic welfare: those with lower levels of prior achievement attending relatively lower achieving schools. The finding that an intervention can redirect this adolescent outcome in this sub-group, in under an hour, without training of teachers, and at scale (i.e. in a random sample of nation’s schools), represents a significant advance."
Personal experience lines up with the result that lower-achieving students may benefit more from the "growth mindset" idea than others. For instance, I did notice that messaging I gave to students with a "fixed mindset" towards studying CS/math seemed to improve motivation, work ethic and interest over the course of a semester.
As Oscar Wilde said: "We are all in the gutter, but some of us are looking at the stars."
I love the gutter and I love the stars!
On the technical-depth criticism you're getting, I would say the episode with Ian Goodfellow had the optimal balance.
And don't take to heart the destructive criticism about bad attitude or tone etc. from some commenters. For someone introverted, it's already draining to put yourself in front of a camera ...
I am intrigued about the discussion on explainable AI. How do you feel about the quality of the current XAI research? What do you think are the most important directions for that field? And what do you see it looking like in 3-5 years?
If someone is willing to dig deeper into the technical details of any of your guests' work, the internet is full of resources, but there's no way to find the personal insights one can get from the podcast.
I do really hope to see Robert Sapolsky and Douglas Hofstadter sooner or later.
Note that both system 1 and 2 are trainable and able to perform complex tasks (he takes the example of a chess player for whom only strong moves come to mind, which is system 1 playing chess in that case; and system 2 is more of a "validation" for system 1's output in such a case).
He doesn't go into it but I think you could make the reverse argument, that system 2 is "checked" by system 1, when we "feel" that something, even though "correct", is just "not right" for instance. That kind of judgement over a thought or idea is nearly instant, it's very system-1 like, and keeps popping up in our thinking as we "judge" said thoughts and do some triage as we go along.
As for AI and system 2, the problem is that system 2 is conscious, deliberate, and aware of causality and meaning, and the last two are really hard problems for now. He mentions earlier ML models pre-DL (when they tried to do it the hard, symbolic way, iirc?), and indeed the question of whether current architectures can or cannot generalize up to system 2 is open. Yann LeCun apparently thinks they can (we just don't know if it's right around the corner or very, very far away); Lex (and most AI experts I've heard) think not, that a fundamentally 'other' kind of architecture is required.
Or what tasks are in the domain of system 2?
I believe it's totally "system 1", and actually by design.
First of all, it's not a new kind of NN; it's more about applying another technique to a given problem, namely considering math syntax as just another kind of language:
> represent complex mathematical expressions as a kind of language and then treating solutions as a translation problem
(which might seem obvious but I guess it took that much refinement to yield actual results)
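To make the "math as language" idea concrete, here's a toy sketch (my own illustration, not the paper's actual encoding scheme) of flattening an expression tree into a prefix-notation token sequence that a seq2seq "translation" model could consume:

```python
# Toy sketch: serialize a math expression tree into a flat token sequence,
# the way you'd serialize a sentence, so a seq2seq model can read it.
# The nested-tuple AST format and operator names are illustrative only.

def to_prefix(node):
    """Flatten a nested-tuple AST into prefix-notation tokens."""
    if isinstance(node, tuple):          # (operator, operand, ...)
        op, *args = node
        tokens = [op]
        for a in args:
            tokens += to_prefix(a)
        return tokens
    return [str(node)]                   # leaf: a variable or constant

# The expression x**2 + cos(x), written as a tree:
expr = ("add", ("pow", "x", 2), ("cos", "x"))
print(to_prefix(expr))   # ['add', 'pow', 'x', '2', 'cos', 'x']
```

Prefix notation is handy here because it needs no parentheses, so the token sequence is unambiguous, just like a sentence in a (very rigid) language.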
Now, take these quotes, emphasis mine:
> Humans who are particularly good at symbolic math often rely on a kind of intuition. They have a sense of what the solution to a given problem should look like
> By training a model to detect patterns in symbolic equations, we believed that a neural network could piece together the clues that led to their solutions, roughly similar to a human’s intuition-based approach to complex problems.
Intuition, intuition-based approach: this is exactly what system 1 represents.
Also note the results:
> Our model demonstrated 99.7 percent accuracy when solving integration problems, and 94 percent and 81.2 percent accuracy, respectively, for first- and second-order differential equations.
One major difference between systems 1 and 2 is that system 1 is fuzzy, intuitive, and not always exact; it's very analog. System 2, by contrast, is able to be correct, exact, and precise. Like the researchers themselves validating the 5,000 answers, you'd expect a "well trained" math intelligence to solve 100% (or close enough) of these problems; it may take time, but give yourself 20 years and you'll get there, no doubt. Whereas this narrow language-AI, with one hundred million examples, still makes mistakes.
Very system 1 indeed.
Model-free (system 1) means fast, stimulus-response mappings. In ML, these mappings are called a policy. Most of reinforcement learning, including deep-learning-based RL, is model-free.
Model-based (system 2) is less widely used. In this case, an agent or system is trying to learn the dynamics of a system and use those to project or forecast into the future. This is really helpful for e.g. learning a control system. Being able to use a model of the world to make accurate predictions lets you plan.
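To make the distinction concrete, here's a toy sketch (the chain world, the policy table, and the planner are all my own invention, not any standard benchmark) contrasting a model-free policy with model-based lookahead planning:

```python
# Toy contrast between model-free and model-based control on a 5-state
# chain world: states 0..4, reward 1.0 for reaching state 4.

N = 5

def step(s, a):
    """Known dynamics model: move by a in {-1, +1}, clipped to the chain."""
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else 0.0)

# Model-free ("system 1"): a cached stimulus->response mapping (a policy).
# Here we hard-code the table a learner like Q-learning would converge to.
policy = {s: +1 for s in range(N)}

# Model-based ("system 2"): use the dynamics model to simulate ahead and plan.
def plan(s, depth=4):
    """Return (best achievable reward, first action) via exhaustive lookahead."""
    if depth == 0:
        return 0.0, None
    best_value, best_action = -1.0, None
    for a in (-1, +1):
        s2, r = step(s, a)
        future, _ = plan(s2, depth - 1)
        if r + future > best_value:
            best_value, best_action = r + future, a
    return best_value, best_action

print(policy[0])     # reactive answer: 1
print(plan(0)[1])    # planned answer: also 1, but found by simulating ahead
```

The policy lookup is a constant-time reflex; the planner re-derives the same answer each time by rolling the model forward, which is slower but lets you handle situations the reflex table was never trained on.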
In an era where the quality of journalism and media has moved from objective analysis toward editorialized hysteria on all topics, the approach you take is especially refreshing.
Of course, I'm sure you don't feel untouchable, but maybe there is a certain cold comfort in knowing that inconsiderate criticism is a hallmark of becoming an institution. Keep up the good work, man.
I'd much rather hear LeCun, Goodfellow, Schmidhuber and Bengio talk about what they're currently working on and where they think the field will go in the next year or two instead of their wild guesses about AGI. I guess the futurism crowd is much larger audience though.
Thanks for the kind words. I work hard on this thing, and hopefully will improve with time.
It's a difficult tradeoff, I know, whether for a podcast or even for a lecture topic. I went to a couple of your sessions this IAP, and I'll admit I found one fantastic (in part because it was directly relevant to some things I'm working on/talking about around AI privacy) and one not so much, because I'm more focused on practical applications than the math underpinnings.
I don't know, I really like these forays into the philosophical.
This clearly is a podcast for quite technical people. His guests most of the time (like, almost always, except the cases where he invites somebody because they were on JRE) are people notable for their technical contributions. They discuss some very technical stuff the guest is known for and is literally the best possible person to teach us about. Instead he skips technical questions almost completely and opts for "what's the meaning of life"?! Come on!
When the discussion happens to be pretty technical (mostly because the guest himself is more of a no-nonsense type of guy), it sometimes feels like some basic background necessary to understand the further explanation was skipped. I assume that it's my fault, since this is supposed to be common knowledge and the host doesn't want to interrupt the guests to clarify such nonsense. And later on I realize that Lex didn't understand that part either, but didn't make any attempt to clarify. Isn't that the point of an interview?
I wouldn't want to offend him, but often I think "oh, such a waste!" listening to his podcasts. So much stuff is left unanswered.
So, yeah. Nice guy, but not so good an interviewer. And way too romantic.
On the technical depth point, I agree. A lot of folks tell me they love the "meaning of life" questions. I love both the technical and the philosophical. My hope is to more and more try to go deep technically with the ML, CS, math, physics folks on the topic of their expertise, and find productive points of passionate disagreement or insight. This isn't easy, and I fail often, but I'm working hard to improve.
The problem with being way too philosophical is that it restricts the discussion to sharing opinions, which is totally OK, but the problem with opinions is that every single person on the planet has one, while it's much rarer to have some knowledge. So it may be really interesting to hear somebody's opinion about something, but as far as learning goes, I don't really gain anything from it: the more abstract and complicated the question, the less difference there is from asking a random person on the street. But when you have somebody in front of you who has knowledge that your audience (or you) doesn't have (be it technical, or the experience of making a wildly successful infotainment YouTube channel, or anything else), you can learn so much more from every single conversation.
Joe Rogan got so popular because he lets his guests drive the conversation and allows them to talk about themselves and whatever it is that they're interested in. He seems to really follow Dale Carnegie's advice from How to Win Friends and Influence People (https://en.wikipedia.org/wiki/How_to_Win_Friends_and_Influen...).
People watch JRE, because it's fun and he has a lot of very influential people as guests nowadays. It doesn't mean that more interviewers should be like him, God forbid...
I think you come across very genuine, not arrogant at all. And it shows that you do your best to reflect, don't worry about that. However, while the philosopher (and sensationalist) in me want to disagree: yes, be more technical! You have a unique audience and guests to cater to, I can't imagine the pressure.
That being said, it's your podcast, and it wouldn't be where it's at if it wasn't for your character and honest curiosity, and it's appreciated for exactly what it is. Cheers!
Interesting. I don't see much in the way of arrogance when I watch these. What you call "deadpan delivery and lack of personality" I attribute to his being Russian, and just having a somewhat stereotypically Russian style of delivery.
I have a close friend who is Russian, and I've found that, at least in his case, there definitely is a sense of humor there, despite the stereotypes. I just find his Russian humor to be very subtle and even after all these years I don't always pick up on when he's joking and when he isn't. From watching your interviews, I get a very similar vibe. I suspect you have a fine sense of humor, but that not all Americans will appreciate or recognize it easily.
I don't know how you could possibly listen to his interview with George Hotz (one of my favourites) and say that he has no personality.
I think you have assumed the worst, that his entire personality is an act, and this has coloured your view of everything else. Once you realize that he's being sincere, I think you will find it much more enjoyable.
For example, almost every guest gets some variation of "Do you think one day we'll have superhuman AI?" Then the guest struggles to come up with some platitude, because there is nothing else to say. It's a waste of time.
He is self-aware too, he sometimes apologizes for asking, yet he keeps doing it anyway.
We're as close to AGI today as we were 10-30 years ago, as in really far away. There's nothing that any of his guests can add to that debate that hasn't been covered in countless scifi novels. I don't care what Vsauce or Bjarne Stroustrup have to say about it.
There's plenty of material available from/about Lex's guests' work; why not ask them to speculate about superhuman AI or whatever?