"Frankly, what surprises me is that the AI community is taking this long to figure out concepts. It doesn’t sound super hard. High-level linking of a large number of deep nets sounds like the right approach or at least a key part of the right approach."
Genuine question I've always had is, are these charlatans conscious of how full of shit they are, or are they really high on their own stuff?
Also it grinds my gears when they pull out probabilities out of their asses:
"The probability of DeepMind creating a deep mind increases every year. Maybe it doesn’t get past 50% in 2 to 3 years, but it likely moves past 10%. That doesn’t sound crazy to me, given their resources."
Among people who think probabilistically, this isn't a weird statement. It's a very low-precision guesstimate. There is a qualitative difference between 50-50, 90-10, 99-1, etc., and it's only their best guess anyway.
Just because you can generate numbers between 0 and 1 doesn't make them meaningful probabilities.
Are these based on data? No, they're rhetorical tools used to sound quantitative and scientific.
Nobody will be applying the calculus of probability to these meaningless numbers coming out of someone's ass.
And most importantly, is he himself willing to bet a significant fraction of his fortune on these so-called probabilities? I don't think so. So they're not probabilities.
A conversation like this happens every minute between investors, people who work at hedge funds, trader types, bookies, people who work in catastrophe ("cat") insurance, etc. They just think this way. These are "priors" in the Bayesian sense, based on intuition. Notice the lack of precision. Nobody says "50%" or "10%" to sound scientific. I'm 99.9% certain it's better than using ambiguous terms like "likely", "probably", "certainly possible", and so on.
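There is a standard way to make the "willing to bet" test concrete: score a forecaster's stated probabilities against what actually happened. A minimal sketch of the Brier score (the numbers below are made up purely for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.

    Lower is better; a perfectly calibrated, perfectly sharp forecaster
    scores 0, and always saying 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: four predictions and what actually happened.
forecasts = [0.9, 0.8, 0.1, 0.5]
outcomes  = [1,   1,   0,   0]   # 1 = the event happened

print(brier_score(forecasts, outcomes))  # prints 0.0775
```

The point is that a "10%" only becomes meaningful as part of a scored track record; a one-off number with no skin in the game and no scoring rule is just rhetoric.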
> Frankly, what surprises me is that the AI community
I came here thinking about this exact part. Well, many of them, but this one in particular.
What surprises me about Elon is how much he can talk about other peoples' work without doing any of it himself. And yet each time I hear him talk about something I'm well-versed in, he sounds fairly oblivious yet totally unaware of that fact.
His go-to strategy seems to be hand-waving with a bit of "how hard could it be?"
I think when people kiss your ass all day and view wealth as a sign of intelligence (which would make Musk the smartest man in history, by a lot), it becomes more understandable.
This is supposedly common in models of grandiose narcissism, a subtype associated with leadership traits (agentic extraversion). Not saying anyone has it, or that it's necessarily a bad thing, but it might be worth exploring for insight into the traits that lead to this type of behavior.