Old habits die hard. And engineers are pretty lazy when it comes to interviews, so throwing the same LeetCode problem into CoderPad every time makes life easier for the interviewer.
How do you tell a candidate who happened to see the problem on LeetCode and memorized the solution from one who struggled but worked it out more slowly?
It's very easy to tell, but it doesn't make much difference. The best candidates have seen the problems before and don't even try to hide it; they just propose their solution right away.
I try to give positive feedback for candidates who didn't know the problem but could make good use of hints, or who had the right approach. Unfortunately, it's difficult to pass a LeetCode interview if you haven't seen a problem similar to the one being asked. Most candidates I interview nowadays seem to know all the questions.
That's what the company has decided, so we have to go along with it. The positive side is that if you do your part, you have a good chance of being hired, even if you disagree with the process.
It doesn’t matter. It’s about looking for candidates who have put in the time for your stupid hazing ritual. It selects for people who are willing to dedicate a lot of time to meaningless endeavors for the sake of employment.
This type of individual is more likely to follow orders and work hard, and, most importantly, to be like the other employees you've hired.
Because if you want to hire engineers, then you have to ask engineering questions. Claude and GPT and Gemini are super helpful, but they're not autonomous coders yet, so you still need an actual engineer to vet their output.
I happen to have a background at this interface as well, as the founder of DeepEarth and Ecodash.ai. I can tell you that I would greatly value such experience in a collaborator, although I am not currently hiring. While having such a specific interdisciplinary niche can feel limiting, I also see it as a potential superpower for excelling in a very important domain. I'll add that machine learning and other modeling techniques are a great bridge between the classical natural sciences and modern tech, and something I would look for in collaborators. Coming from the earth sciences specifically, "GeoAI" would be a key focus.
The one thing I got out of the MIT OpenCourseWare AI course by Patrick Winston was that all of AI could be framed as a problem of search. Interesting to see Demis echo that here.
The 50+ filters at Ecodash.ai for 90,000 plants came from a custom RAG model on top of 800,000 raw web pages. Because LLMs are expensive, chunking and semantic search, i.e. figuring out what to feed into the LLM for inference, is a key part of the pipeline nobody talks about. I think what I did was: run all the text through the cheapest OpenAI embeddings API. Then, I recall that nearest-neighbor vector search on a single query wasn't enough to catch all the information relevant to the question the LLM had to answer. So I remember generating a large number of diverse queries that mean the same thing (e.g. “plant prefers full sun”, “plant thrives in direct sunlight”, “… requires at least 6 hours of light per day”, …), doing nearest-neighbor vector search for every variant, and using the aggregate statistics to choose what to feed into the RAG context.
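If it helps, here's a minimal sketch of that step in Python. To be clear, this is illustrative rather than my actual pipeline: the embedding model name, the top-k and vote thresholds, and the helper names are all placeholders.

    # Multi-query retrieval sketch: embed many paraphrases of one question,
    # run nearest-neighbor search per paraphrase, and keep the chunks that
    # several paraphrases agree on.
    from collections import Counter

    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(texts):
        # Cheap embedding model (illustrative choice).
        resp = client.embeddings.create(
            model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    def retrieve(chunk_vecs, chunk_texts, variants, k=20, min_votes=3):
        # chunk_vecs: (n_chunks, d) array of precomputed chunk embeddings;
        # chunk_texts: the corresponding text chunks.
        q = embed(variants)
        sims = q @ chunk_vecs.T        # cosine sim; API vectors are unit-norm
        votes = Counter()
        for row in sims:
            for i in np.argsort(row)[-k:]:   # top-k chunks per paraphrase
                votes[int(i)] += 1
        # A chunk must be retrieved by at least min_votes paraphrases.
        return [chunk_texts[i] for i, v in votes.most_common()
                if v >= min_votes]

    variants = [
        "plant prefers full sun",
        "plant thrives in direct sunlight",
        "requires at least 6 hours of light per day",
    ]
    # context = retrieve(chunk_vecs, chunk_texts, variants)
    # ...and `context` is what gets fed to the LLM for inference.

The vote count is the “statistics” part: a chunk that matches only one phrasing is often embedding noise, while a chunk that several paraphrases independently retrieve is almost always relevant.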
Hey, thanks for unpacking what you did at ecodash.ai.
Did you manually curate the seed queries you ran LLM query expansion on (to generate the large number of diverse queries), or did you simply use the query log?
I’m glad Ilya starts the talk with a photo of Quoc Le, who was the lead author of a 2012 paper on scaling neural nets that inspired me to go into deep learning at the time.
His comments are relatively humble and based on public prior work, but it’s clear he’s working on big things today and also has a big imagination.
I’ll also just say that at this point “the cat is out of the bag”, and probably it will be a new generation of leaders — let us all hope they are as humanitarian — who drive the future of AI.
Obviously the article is challenging the view — scientific or not — that mitochondria are not living.
Side note: previously I was funded by NSF and NASA to study such questions from biophysics and astrobiology.
That said, this was a delightful read. I had never realized or conceived of mitochondria as, like the bacteria in our bodies, independent living networks with unique genomes, evolution, and flows of information and energy.
Reading about the health benefits of “external mitochondria” made me think about when I hug my dog: are we exchanging mitochondria, perhaps?
Restricted Boltzmann Machines were a huge revolution in the field, warranting a publication in Science in 2006. If you want to know what the field looked like back then, here it is: https://www.cs.toronto.edu/~hinton/absps/science.pdf
I remember spending several pages of my 2012 MS thesis on deep neural networks on Boltzmann Machines and Geoffrey Hinton's physics-inspired theories.
My undergraduate degree was in physics.
So, yes, I think this is an absolutely stunning award. The connections to statistical entropy (inspired by thermodynamics) and, of course, to the biophysics of human neural networks should not be lost here.
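For anyone who hasn't seen it, the physics connection is literal. An RBM assigns an energy to every joint configuration of visible units v and hidden units h, and the model is just the Boltzmann distribution over those energies (a and b are biases, W the weight matrix, Z the partition function straight out of statistical mechanics):

    E(v,h) = -a^\top v - b^\top h - v^\top W h
    p(v,h) = \frac{e^{-E(v,h)}}{Z}, \qquad Z = \sum_{v,h} e^{-E(v,h)}

Hinton's trick in that era was making the likelihood gradient of these energy-based models tractable with contrastive divergence, which is what made the 2006 pretraining results possible.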
Anyways, congratulations to Geoffrey Hinton. And also, since physics is the language of physical systems, why not expand the definition of the field to include the "physics of intelligence"?
Yeah, I agree about the 2006 Hinton paper. I read it and reread it and didn't get it; I didn't have the math background at the time, and it inspired me to go get it. And here I am, almost 20 years later, working on it.
Why are we still interviewing like it's 1999?