OpenAI CEO Sam Altman on Lex Fridman Podcast [video] (youtube.com)
98 points by ftxbro on March 25, 2023 | 94 comments



Why does Lex have so much “now impress me” energy in these podcasts?

It’s not even a personality quirk because he seems to modulate it throughout the podcast. Surely he can get rid of it altogether.

The rise of Lex is both baffling and utterly expected to me as an AI insider. He used his MIT badge to the fullest extent. Having seen his “lectures,” I wouldn’t bet on him cracking an interview for a beginner AI researcher role anywhere.


He seemingly doesn't have...any depth in his public persona. Very bizarre guy. Feels more obfuscatory than actually being that lacking.

His comment sections scream inorganic to me. Always 99% blanket sunshine out the ass praise for how mind blowingly incredible every episode is, never with any specific details.


14:17: "I suspect too much of the processing power is going into using the model as a database instead of using the model as a reasoning engine"

https://youtu.be/L_Guz73e6fw?t=857


Can you (or someone) expand more on what he means by this?


When you ask GPT to add two numbers, it probably tries to recall a similar instance from its training set rather than performing the calculation on the spot.
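A toy Python sketch of the difference (llm() here is a hypothetical text-completion call, not a real API):

    # "Database" mode: the model pattern-matches against memorized text,
    # so the product of two arbitrary numbers often comes back wrong.
    guess = llm("What is 48734 * 51209?")

    # "Reasoning engine" mode: the model only plans the computation and
    # an exact tool executes it.
    expr = llm("Write a Python expression that computes 48734 * 51209")
    exact = eval(expr)  # 2495619406, computed by the interpreter, not the model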


Asking it to retrieve info instead of iterating through analysis steps with inputs toward some goal.


Sam indicated, in the context of discussing a "democratic" constitution for AI use, that OpenAI can -not- be a "mere advisor" in the process.

Sam also (unwittingly) underscored [for me] the alarm of the NYTimes columnist who wondered at the open admission (which Sam repeated here) that "there is a chance" AI will kill off humanity, but that "the iterative approach" of OpenAI could help with that.


The plastic surgery has a heavy uncanny valley vibe.


Just want to remind people: Lex is a human being. So is Sam. Keep it constructive?


Interesting comment about consciousness being the fundamental substrate of the world and that we are in a dream or simulation. That is exactly the Nondualist position. Did he say Brahman?

https://youtu.be/L_Guz73e6fw?start=1:08:22


Wish there were a follow-up discussion about those ideas instead of a context change. And yes, Altman has tweeted something similar before:

> haven’t seen this as a twitter thread, so: what true thing do you believe that few people agree with you on?

> absolute equivalence of brahman and atman

https://twitter.com/sama/status/1607482407770554373


I had the exact same question. YT subtitles say he did.


The victimhood claim didn't take long. Sure, that's Lex's editing, but...

Anyway, then read:

https://news.ycombinator.com/item?id=34471720

> Ask HN: How did Sam Altman fail upward so well?

> 245 points by VirusNewbie 62 days ago - 119 comments


It's remarkable how unremarkable that interview was.

A major problem for me was that there was very little new substance amid lots of nonsense. One of the biggest criticisms, about transitioning from non-profit to for-profit, was basically summed up as "we needed to raise capital, but don't worry, we're still ethical". The ongoing discussions about safety and bias, already discussed to death, stayed at a very superficial level. A question about how much parameter size matters got basically "we just want to do what's best". How about some insightful answers from someone with first-hand knowledge instead of answers like a politician running for office?

Lots of weird circle-jerking as well about how he's one of the few people that can maybe make AGI and how powerful that makes him. I feel like the interview is completely detached from the real world where it basically comes down to advancements in technology and data. We saw how fast Stable Diffusion covered ground. Also really feels like it's too sensational to be talking so much about AGI - idk why Altman is acting so mysterious about 'democratizing the process'. Make the models public. Make the research public.

And of course there's the typical Fridman nonsense. There's a whole section on 'Elon Musk', not even really about his history with the company, but about Twitter drama. It's distracting how much Fridman worships Elon; he's made multiple offers to Elon to run Twitter only to get ignored.


Welcome to all of human discourse. Has this been any different at any point in human history? If you’re a scientist, 80% of the time you spend in conferences or reading papers is spent digesting stuff you already know, with a few new nuggets of information. You have to acknowledge that every human being has an agenda and they’re going to act in their self-interest. Why would Sam give more details than he wants to in this or any meeting, given the primarily economic but ostensibly ethical reasons not to reveal them? I still got many, many useful pieces of information from this; it was worth the 2 hours of time and enduring all the drivel that goes in between by nature of this being a Lex podcast.


Why didn’t OpenAI just shut down if being a non-profit was unviable? Of course great things have happened because it survived. But survival instinct is as much a part of that transition as anything else.


Thank you for saving me a few hours. Really appreciate it.

I'm starting to listen to fewer and fewer of his podcasts. For me the decline has been obvious with some of his recent guests. I thought the Aella one would be very interesting, but as another commenter mentioned, he had an "impress me" vibe the entire time and couldn't connect even a little with the guest. Then the Sam Harris one exposed his weaknesses very obviously. If you want to see him being defensive and not saying anything of substance, that's a good one to listen to. Sam makes a lot of great points and Lex just goes on and on about the "power of love" and how we are all "human beings" trying to be "understood". After a while it gets irritating, like hearing a broken record. His inability to either provide a coherent argument for why he aired the Kanye episode or admit it was a mistake was very telling. That episode has more Elon worshipping as well. He tried to get Sam to reconnect with Elon and become friends again. lol.

His podcasts are on average above Joe's quality (I don't listen anymore, but used to years ago), but I think that's primarily because of whom he selects as guests. At least Joe was significantly more entertaining. Might try going back to that.


Joe is just more curious. He puts in more thought to his questions.

Lex's questions feel like a ChatGPT parody of himself. He doesn't push his guests or seek to be entertaining. He took the 'just let them talk' idea too literally, I think. It's just a PR-speak 4hr+ podcast for whoever his guests are.

I actually couldn't stand the Harris episode. It felt so meandering and had no real direction. No pushback from either side; they almost talked past each other, but in a meta way. It felt like being in a conversation where I talked way too much and should have listened more. The episode could have been 1hr.


Every time Sam asked Lex a question I was excited for some discussion, and then disappointed when Lex started talking. He either goes off on a random tangent or doesn't get to the heart of the question/debate. Just very shallow and vapid, which is disappointing given the guests and Lex supposedly having a technical background. I'd rather Joe Rogan interview Lex's guests, and that says a lot.

Example: https://youtu.be/L_Guz73e6fw?t=3800

He does have interesting guests though that I don't see on other podcasts so maybe he's seen as safe / controllable and is PR approved.


It appears the interview took place on March 20, 2023, six days after the launch of GPT-4:

https://youtu.be/L_Guz73e6fw?t=1797


I find Lex Fridman to be the most boring podcaster on the planet. And I have heard plenty of drones in the past, but he takes it to the next level.


He still has a lot of great guests, and they have the chance to talk freely, sometimes for multiple hours. That's great. Where else do you have that? Where can I see a video of Sam Altman or whoever else talking for >2h on a wide range of interesting topics?

I think, especially in the AI field, he has enough knowledge to ask the right questions, which allow the guests to really go into depth on their thought process.


How I Built This has an interview with Sam Altman that's 1+ hour long.


See, this is what irks me. Lex's (what a name! We're surrounded by supervillains!) popularity feels very fake to me. This guy has the personality of a wet napkin, doesn't seem smart enough to interview half the people he does, boasts about being a professor or something (he doesn't even live near the uni?), and it seems like rich people have settled on him as a high-impact and safe bet for marketing their companies.

But then I remember that entire forums exist full of people who consider Rationality to be a defining trait. And I don't think they care much about advertising, seeing how "Effective Altruism" now has hundreds of millions (billions) of dollars and chateaus.

IMO there are other podcasters that have better guests and are much more interesting. Here's one https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w (Curt Jaimungal)


I listen to maybe half the podcasts that Lex releases these days. He has some views that I disagree with and some traits that I find annoying, but overall I still enjoy the podcasts. There are some great guests and he allows them to speak at length on all kinds of topics. That’s the thing, the podcast is not really about him, it’s about the guests.

Also I have literally never heard him boast about being a professor, he basically doesn’t say anything about his work on the podcast at all other than that he works in AI.


He occasionally has guests that say things that anybody with two brain cells can figure out is nonsense, and Lex will simply agree with it. He's simply a bad interviewer.


I think you can be a so called “bad interviewer” and still have a good podcast hah


> He has some views that I disagree with

Yeah, “I will interview Vladimir Putin” — thanks but no :S


>boasts about how he's a professor or something (he doesn't even live near the uni?)

I've listened to at least dozens of his podcasts and I have never heard anything like this. You may be thinking of something else. It does come up that he used to work with self-driving cars but only when it's relevant, it doesn't come across as boasting IMO (he doesn't even claim to be an expert, only familiar with the main ideas of the time).


It's maybe personal preference, but I find Lex's guests more interesting compared to that. I'm specifically interested in deep learning, neural networks, generative models, but also neuroscience and some related areas. Then I'm also interested in some tech topics, e.g. everything John Carmack talks about, etc. Lex basically interviewed all the big names in those areas.


The episodes are really hit or miss. I really enjoyed the interviews with John Carmack and with Guido van Rossum. I started several other episodes that I just bailed on, though, and it's a big time investment just to get to the point where you realize you are wasting your time. So at this point I won't go out of my way to listen to episodes but I will go listen to one when it's getting talked about elsewhere.

The less Lex is talking and the more the person he's interviewing is talking, the better.

BTW if anyone who enjoyed the van Rossum and Carmack interviews has any suggestions on other episodes I might like, I'd love to have them.


He's just a bad interviewer.

An example I've noticed: A guest will have essentially addressed a question Lex has planned for later in the interview, but Lex still asks it without any sort of modification/acknowledgment of what the guest has already said, leading to a lot of repetitive talk.

He also basically ignores statements that conflict with his view.


I suspected something was off when Fridman managed to interview Demis Hassabis on the topic of "AI, Superintelligence & the Future of Humanity" for more than two hours in July 2022 and the topic of large language models never came up. I don't know the reason. Someone suggested it was because it was super secret and Hassabis made a deal with Fridman before the interview not to talk about it, but... idk.


I've only watched a couple of his interviews, but at least in the ones I watched (which I think this one might have been one of), he was kind of asking experts to confirm his very positive opinions about Elon/Karpathy as opposed to focusing on the stuff they were doing. I find it very strange that he is so famous and popular.


> because it was super secret

What was super secret? LLMs had been around for a while on July 2022. Also, all of the hype around LLMs since then is due to OpenAI's releases, so why would Hassabis be opposed to discussing it?


Yes, if you read again I hope you will see that we agree!


"But... idk" was at least a bit ambiguous. It could range from "this is a conspiracy theory that makes no sense" to (my original interpretation) "I'm not sure if this is true or not, what do you guys think?"


He's kind of a useful idiot in the sense that he's garnered a large audience/influence, but lets the guests own that audience for the interview's duration.

It's why you see guest execs interested in controlling a narrative like Bourla, Zuckerberg, Altman...


Thank you for explaining. It makes total sense now. I tried to listen to it and found it quite annoying because the interviewer gave no direction to it, nor cared whether a question was actually answered. Maybe I can even listen to it now, accepting that simple fact.


Bourla's was by far the most controversial interview, with him not acknowledging that the vaccine has risks and side effects.

I took it as fast as I could, because the researchers at the companies did a great job, but still, the way he talked about it was unscientific.


Hard disagree. Maybe a little monotone in the literal sense, but he’s very knowledgeable and asks insightful questions.


I think he's an excellent interviewer. My favorite podcast series by far. And even if he wasn't, I wouldn't express my disdain in such a harsh way. It's entirely unnecessary.


From the guidelines: Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.


Hard disagree with the guidelines, this comment prompted lots of great discussion.


There is no need to prompt valuable reaction through cheap triggers.


Lex interviewing MrBeast a couple months ago was the most unintentionally hilarious interview I can remember. Lex could not engage MB on _anything_, they speak entirely different languages. I don’t expect MB to be a genius, but Lex couldn’t modulate his style to connect which I thought reflected badly. Like watching two beached dolphins play table tennis.


I thought it was just me with this opinion. I have always thought his style reminds me of a high school kid learning how to interview, just very amateurish. Not sure how he ever really got so popular...


Personally I like him because he is amateurish. I think you'll find that people can relate to amateur level discussions because most people are amateurs.


By more popular people, such as Joe Rogan, who has had him on several times and seems to be a "good friend."


>99% of people will look like a child next to most of the people he interviews.


  Lex Fridman: charisma is a dangerous thing. Flaws in communication style is a feature, not a bug in general, at least for humans in power. 
https://youtu.be/L_Guz73e6fw?t=5533


My biggest issue is how he often tries to pseudo-intellectually (and IMO disingenuously) push a right-wing agenda as if it's coming from a place of sophistication. Here, the first thing he did was bring up, of all people, Jordan Peterson doing one of the dumbest experiments with ChatGPT (something even my 10-year-old niece would call dumb and cringe-inducing) and ask Sam to comment on it.


He might be boring, but he’s very good natured and interested in others. You would be amazed how hard it is to be genuine in this way. It’s also what makes people open up in an interesting way.

This is not a quality I know in many people, it’s very rare. It also doesn’t make him look good as an interviewer, but it does make for good interviews.

I also find the calibre of his guests much more appealing than many other podcasts.


i find cats to be annoying, but i don't go around commenting that on cat videos


But you do comment on comments about cat videos.


It's hard to resist commenting on comments on comments about cat videos.


unless someone's pointing a laser at it and wiggling it around


Lots of good insights in this podcast.


Off topic: did Sam get a face-lift? Or is that just tons of Botox?


I rolled my eyes when I saw your comment, then I watched the video. The lips and cheeks are distinctly botox'd.


Perhaps he's beginning his transformation into the Third Bogdanoff


Looks like he had his nose done and has definitely had Botox. I don't think he's had fillers; it looks like steroid puffiness. Probably adjusting to something hormonal, or getting over COVID or similar with some help.


Looks like botox and fillers to me


Yeah he's almost unrecognisable from his Hacker News interviews.


Why would you get botox at 37? Unless it's medical.


There's supposedly an increasingly common practice of starting early, so the wrinkles in those areas don't ever develop in the first place...

I imagine it also prevents a more abrupt change later on if you're intending to do it. Going from an expressive wrinkled face to a paralyzed one in your 50s would be much more jarring for your peers, than just lacking expressiveness much of your adult life.

It's all a rather weird form of vanity to me.


Self-esteem issues


I know lots of people in their late 20s / early 30s who get botox.

I personally don't understand it either, but it's definitely a thing.


What are the reasons you've heard your friends give? I don't know anyone who's done it.


They’re very self conscious about wrinkles, in a nutshell, and they think wrinkles make them look old. These are single / actively dating people in a city, too, FWIW.


Timeline of the Podcast:

    4:36 - GPT-4
    16:02 - Political bias
    23:03 - AI safety
    43:43 - Neural network size
    47:36 - AGI
    1:09:05 - Fear
    1:11:14 - Competition
    1:13:33 - From non-profit to capped-profit
    1:16:54 - Power
    1:22:06 - Elon Musk
    1:30:32 - Political pressure
    1:48:46 - Truth and misinformation
    2:01:09 - Microsoft
    2:05:09 - SVB bank collapse
    2:10:00 - Anthropomorphism
    2:14:03 - Future applications
    2:17:54 - Advice for young people
    2:20:33 - Meaning of life 

Seems like about the first half (up to the 1-hour mark) is interesting enough to warrant a listen, but echoing what ilrwbwrkhv said, Lex Fridman is kind of boring, which makes it hard to even sit through the first half. Feels a bit like he has the personality of a fridge.


> Feels a bit like he has the personality of a fridge.

Do other people listen to podcasts based on the personality of the interviewer? I just assume that you listen based on the guest, and the host is just there to ask decent questions, which Lex does quite well.


Yes, I exclusively listen to podcasts based on the personalities of the hosts. I listen because I want to hear them talk, regardless of the topic.

The best episodes of one of my favorite podcasts are about getting a fridge delivered https://www.relay.fm/rd/102 and having a colonoscopy https://www.relay.fm/rd/202

I really enjoy The Verge's interview podcast Decoder, mostly on the back of Nilay Patel's personality and the skill he brings to interviews.


Most definitely. The interview-style podcast I listen to the most is Odd Lots [1]. I'll listen to any Odd Lots episode regardless of the guest because of the personalities of Tracy and Joe. I also listen to re-runs of Car Talk [2] (which, I guess maybe isn't a podcast?) because of the hosts.

1. https://www.bloomberg.com/oddlots

2. https://www.npr.org/podcasts/510208/car-talk


An interview is supposed to be a back-and-forth. Otherwise, why not just have someone answer questions off a piece of paper?


I think I might prefer that, but it doesn't exist.


Usually I listen to interviews based on the chemistry between the two. If either of them is bland, the entire thing gets thrown off balance.

If it's all about the guest, the perfect interview for you would be someone emailing the guest the questions so they can read them themselves, without involving a second person?


> Lex Fridman is kind of boring which makes it hard to even sit through the first half. Feels a bit like he has the personality of a fridge.

Odd take IMO. Lex clearly has a lot of personality (although he's too much of a romantic for my taste); he just doesn't have much of a delivery, due to his monotonous speaking style. These aren't the same thing.

I guess I always just find it weird when people equate animated, extroverted personas with "personality".


Very harsh on fridges. The man has no personality at all.


Lex Fridman is one of the most popular podcasters on the planet right now, hosting celebrity guests for long-form interviews.


I'm glad Lex chose to spend half of it discussing Jordan Peterson and Elon Musk.

Also, as great and useful as the LLMs are, it's still a bit of a leap to claim they're getting us that much closer to AGI. GPT does a great job compressing and distilling the knowledge found in text on the internet, but it still doesn't have the ability to do real reasoning. As a simple example, I recommend trying to have it multiply two large numbers: it does a decent job guesstimating, but obviously can't do the recursive steps required to generate a correct answer.
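For the curious, the "recursive steps" here are just ordinary long multiplication, which is trivial to run outside the model but can't be executed inside a single forward pass (the numbers below are arbitrary examples):

    # Long multiplication as a sum of digit-shifted partial products.
    a, b = 745893, 918464
    partials = [int(digit) * a * 10**i
                for i, digit in enumerate(str(b)[::-1])]
    assert sum(partials) == a * b == 685075868352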

EDIT: Also, people worried about AGI implications should slow down and worry about what humans and society will do with tools that can automate a large portion of our jobs and accrue all of the gains to a few well-capitalized players.


This parent post with its EDIT is a perfect example of what I wrote in another comment: Beliefs are changing so fast right now. The term "AGI skeptic" will soon (if not already) mean "I don't trust AGIs in positions of authority or power" rather than "I don't think the technology is capable of matching our level of cognition."


Hang on, we’ve also changed the definition of AGI. This thing is increasingly “marketed” as an AGI, and as someone who has used it, I don’t want it anywhere near critical decision making.

There is marketing and there’s reality. Don’t conflate the two.

There is untold money that has been spent on these systems. They’re going to be pushed into our lives whether we like it or not. That’s not to say there is no value in machine learning, but there’s also a lot of money to be made from it.

Think of the opioid crisis: you had opioids prescribed even if it wasn’t in your interest.

This is America…


>it still doesn't have the ability to do real reasoning.

It technically does. Humans solve problems in a recursive way, much like generative transformer networks generate the next piece of text. The big difference, of course, is the depth of reasoning.

My bet is that the next big effort will be to reduce the size of the models while retaining performance (perhaps using ML tasked to do so). Then, from the other side, there will be more and more specialized ASIC hardware to run the transformers. Eventually we will be able to run these things at scale and do on-the-fly transfer learning, which will be about the same as a human growing their reasoning skills as they mature.


> it still doesn't have the ability to do real reasoning

I don't think that matters. It does significantly more than algorithms designed simply to maximise engagement, and those have revolutionised the online world in the last decade. This will be orders of magnitude more powerful. And given what the simple 'maximise engagement' algorithms have done to various aspects of society and the people who live in it, I don't think this will be good, given how little attention seems to be paid to these aspects by those who stand to profit from their use.


It might not matter to whether it's a useful product, but it matters to whether it is artificial general intelligence or not.


Yes, my post was mostly a response to the first few segments of the interview where Sam talks about how experts in the field didn’t believe in their (OpenAI’s) focus on AGI.

I’ve been working in deep learning for over 10 years now and was always bullish on the technology, but as great a tool as these LLMs might be, I wouldn’t bet on them getting us to true AGI anytime soon.


Has Lex started to be more critical of Musk, or is he still behaving quasi-sycophantically?


>As a simple example I recommend trying to have it multiply two large numbers, it does a decent job guestimating but obviously can't do the recursive steps required to generate a correct answer.

Is it worse than the average human at mental arithmetic? I haven't done detailed benchmarks, but for my use case it only gets maths wrong once it gets way past the range that I could do myself.

Personally, I find the fact that it can't say "I don't know" to be much more off-putting than the dodgy maths.


> Personally, I find the fact that it can't say "I don't know" to be much more offputting

This is a problem with their PPO "finishing school", not with their base model. Check Figure 8 in https://arxiv.org/pdf/2303.08774.pdf: the base model has an uncannily well-calibrated confidence in the correctness of its answers. Basically, their PPO is for training it to talk to normies, and normies don't like uncertainty.

Reinforcement learning using Proximal Policy Optimization (PPO): they optimize a policy against the reward model using reinforcement learning. For each new prompt sampled from the dataset, the policy generates an output, the reward model calculates a reward for that output, and the reward is used to update the policy via the PPO algorithm.
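A rough sketch of that loop (policy, reward_model, ppo_update, sample, and prompt_dataset are all hypothetical stand-ins, not OpenAI's actual code):

    # One round of RLHF fine-tuning, as described above.
    for prompt in sample(prompt_dataset):
        output = policy.generate(prompt)             # policy produces a completion
        reward = reward_model.score(prompt, output)  # scalar preference score
        # PPO clips the policy-gradient step so the updated policy stays
        # close to the old one (typically with a KL penalty to the base model).
        ppo_update(policy, prompt, output, reward)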


True, but even with the base model it is hard to go from logits to a confidence level for freeform answers. For multiple-choice questions it is easy, since you just look at the probabilities for each of the allowable tokens in the answer (which is presumably what they did in that benchmark), but I have no idea how to do it more generally.
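For the multiple-choice case, a minimal self-contained sketch (the logit values are made up for illustration):

    import math

    # Hypothetical next-token logits for the four answer letters.
    logits = {"A": 2.1, "B": 5.3, "C": 0.4, "D": 1.8}

    # Softmax restricted to the allowable answer tokens gives a
    # per-option confidence.
    z = sum(math.exp(v) for v in logits.values())
    confidence = {tok: math.exp(v) / z for tok, v in logits.items()}
    print(confidence)  # {'A': ~0.04, 'B': ~0.93, 'C': ~0.01, 'D': ~0.03}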


It is absolutely not obvious it can't do the recursive steps to multiply two numbers. In fact, you can just teach it the proper algorithm and ask it to apply it: https://twitter.com/random_walker/status/1600336556425826304...


The feedforward nature of transformers means that they can only do a limited number of computational steps per token. Having it explain an algorithm is a good way to make it use its own output as a tape during inference/decoding, but even then it just fudges things and makes mistakes for largish numbers.


I find Peterson to be an insufferable coot. Musk is obviously more dynamic.



