I am looking to get up to speed with the latest things happening in AI. I use ChatGPT almost every day, and I last used the OpenAI API for GPT-3.5 last year. I am looking for tech blogs, like HN, to keep updated on things AI. I came across https://simonwillison.net/ but it appears fragmented.
The poster's looking for articles, so this recommendation's a bit off the mark, but I learned more from participating in a few Kaggle competitions (https://www.kaggle.com/competitions) than I did from reading about AI. Many folks in the community shared their homework, and by learning to follow their explanations I developed a much more intuitive understanding of the technology. The first competition had a steep learning curve, but I felt it was worth it. Having a specific goal and provided datasets made the problem space much more tractable.
Not the poster you responded to, but I learned quite a bit from Kaggle too.
I started from scratch, spent 2-4 hrs per day for 6 months, and won a silver in a Kaggle NLP competition. I use some of it now, but not all of it. More importantly, I'm quite comfortable with models and understand the costs/benefits/implications, etc. I started with Andrew Ng's intro courses, did a bit of fastai, did Karpathy's Zero to Hero fully, all of Kaggle's courses, and a few other such things. Kagglers share excellent notebooks, and I found them very helpful. Overall I highly recommend this route of learning.
I started with this 3-part course - https://www.coursera.org/specializations/machine-learning-in.... I think the same course is available at deeplearning.ai as well, I'm not sure, but I found Coursera's format of ~5 min videos on the phone app very helpful (with speed-up options). I was a new mother and didn't have continuous hours of time back then; I could watch these videos while brushing my teeth, etc. It helped me not to quit. After a point I was hooked, the baby also grew up a bit, and I gradually acquired more time and energy for learning ML. :)
fastai is also amazing, but it's made of 1.5-hour videos and is more free-flowing. By the time I even figured out where we had stopped last time, my time would sometimes be up, which was very discouraging. But later, once I got a little more time and some basic understanding from Andrew Ng, I was able to attempt fastai.
I mean, yes, but how much does Kaggling / the traditional ML path actually prepare you for the age of closed model labs and LLM APIs?
I'm not even convinced Kaggling helps you interview at an OpenAI/Anthropic (it's not a negative, sure, but I don't know that it'd be what they'd look for for a research scientist role).
I learned ML only to satisfy my curiosity, so I don't know if it's useful for interviewing. :)
Now when I read a paper on something unrelated to AI (say, progesterone supplements) and they mention a random forest, I know what they're talking about. I understand regression, PCA, clustering, etc. When I trained a few transformer models (not pretrained) on texts in my native language, I was shocked by how rapidly they learn connotations. I find transformer-based LLMs to be very useful, yes, but not unsettlingly AGI-like, as I did before learning about them. I understand the usual way of building recommender systems, embeddings, and such things. Image models like U-Nets, GANs, etc. were very cool too, and when your own code produces that magical result, you see the power of pretraining + specialization. So yeah, I don't know what they do in interviews nowadays, but I found my education very fruitful. It was how I felt when I first picked up programming.
Re the age of LLMs: it is precisely because LLMs will be ubiquitous that I wanted to know how they work. I felt uncomfortable treating them as black boxes I don't understand technically. Think of people who don't know simple things about a web browser, like opening the dev tools and printing an auth token. It's not great to be in that place.
I don't think it's a good idea to keep up to date at a daily/weekly cadence, unless you somehow directly get paid for it. It's like checking stocks daily: it doesn't lead to good investment decisions.
It's better to do it in batches, say once every 6-12 months or so.
Yes this is why I never buy the latest CPUs and try to never run the latest release of any software. Stay a (supported) release or two behind the bleeding edge, and you'll find stuff is more stable. Common bugs and other issues have been shaken out by the early adopters.
and is curated by me/my team. hope that helps people keep up on the video/talk-length form factor (as in, instead of books, though we also have 2-3 hour workshops)
1. Buy O'Reilly (and other tech) books as they come out. There will be a lag, but essentially somebody did this research and summarization work and wrote it up for you in chapters. Note that you don't have to read everything in a book. Also, $50 is a great investment if it saves you tens of hours of time.
2. Talks on YouTube at conferences by industry leaders, like Yann LeCun, or by maintainers of popular libraries, etc. Also, YT videos on the topic that are upvoted/linked.
3. If you're interested in hardcore research, look for review articles on arxiv.
4. Look at the tutorials/examples in the documentation/repos of popular ML/AI libraries, like PyTorch.
5. Try to cover your blind spots. One way or another, you'll know how new AI is applied to SWE and related fields. But how is AI applied to perpendicular fields, like designing buildings, composing music, or balancing a budget? Covering these areas will be tougher because the signal is noisier: most commenters will be non-experts compared to you. To get a feel for this, do something that feels unnatural, like watching TED talks that seem bullshitty, reading HBR articles intended for MBAs, and checking out what Palantir is doing.
It also becomes easier. If you missed the early-2000s SOAP hype, for example, you... just saved a load of time. Maybe by stepping back you can avoid LangChain and various other workaround tooling, and see what wins.
So I'm currently using "OpenCV University"'s playlist on YouTube to get myself up to speed with computer vision, and this has led me down a spiraling staircase into the depths of CNNs.
After that, I've had some recent projects I love messing around with, such as a better license plate detection API than what currently exists for U.K. plates. Once I completed those two courses, I had a good enough baseline to work from: I could encounter a repository and google around if I needed to learn something new.
Short, simple, not painful, etc. I also don't have the advanced mathematical background (nor familiarity with American mathematical notation) that I'd need to digest the MIT course set, so this learning path has been the best for me. I'm no expert whatsoever, though.
Admittedly, I'm way behind on how this translates to software on the newest video cards. Part of that is that I don't like the emphasis on GPUs. We're only seeing the SIMD side of deep learning, with large matrices and tensors, while at least a dozen other machine learning approaches are being neglected, notably genetic algorithms. Which means that we're perhaps focused too much on implementations and not on core algorithms. It would be like trying to study physics without changes of coordinates, Lorentz transformations, or calculus. Lots of trees but no forest.
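(For readers who haven't run into them: a genetic algorithm needs no GPU at all. Here's a minimal, illustrative Python sketch of the idea, selection plus crossover plus mutation, on the toy "OneMax" problem; all names and parameters are invented for the example.)

```python
import random

def evolve(fitness, genome_len=16, pop_size=50, generations=100,
           mutation_rate=0.05, seed=0):
    """Toy genetic algorithm: evolve bit-string genomes to maximize `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)   # single-point crossover
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with small probability.
            child = [g ^ 1 if rng.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# "OneMax": fitness is simply the number of 1 bits in the genome.
best = evolve(fitness=sum)
```

Note how this is branchy, population-based, embarrassingly parallel across individuals, and maps poorly onto the big-matrix SIMD style that GPUs reward.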
To get back to rapid application development in machine learning, I'd like to see a 1000+ core, 1+ GHz CPU with 16+ GB of core-local RAM for under $1000, so that we don't have to manually transpile our algorithms to GPU code. That should have arrived around 2010, but the mobile bubble derailed desktop computing. Today it should be more like 10,000+ cores for that price at current transistor counts, increasing by a factor of about 100 each decade by what's left of Moore's law.
We also need better languages. Something like a hybrid of Erlang and Go with always-on auto-parallelization to run our human-readable but embarrassingly parallel code.
Short of that, there might be an opportunity to write a transpiler that converts C-style imperative or functional code to existing GPU code like CUDA (MIMD -> SIMD). Julia is the only language I know of even trying to do this.
Those are the areas where real work is needed to democratize AI, that SWEs like us may never be able to work on while we're too busy making rent. And the big players like OpenAI and Nvidia have no incentive to pursue them and disrupt themselves.
Maybe someone can find a challenging profit where I only see disillusionment, and finally deliver UBI or at least stuff like 3D printed robots that can deliver the resources we need outside of a rigged economy.
I have stopped following Matt Berman; his enthusiasm and love for OpenAI have unfortunately made him lose all semblance of critical thinking when it comes to this company, so his videos are too aligned with OpenAI marketing...
He's absolutely very enthusiastic about many things! I take that into account and mentally adjust when listening to him, but he's been great as [one] source of AI product news.
I admire the YouTubers a lot and often wonder if I should be venturing into that domain. YouTube takes a lot of work but also has the greatest reach by far.
First thing you need to do is change your LinkedIn title to "AI evangelist", then go to your boss and say, "I want triple the pay." Then let the chips fall where they may. Oh, also rename all your GitHub or personal projects to have AI in the name. You don't actually have to do much else.
As an aside, does anyone have any ideas about this: there should be an app like an 'auto-RAG' that scrapes RSS feeds and URLs, in addition to ingesting docs, text and content in the normal RAG way. Then you could build AI chat-enabled knowledge resources around specific subjects. Autogenerated summaries and dashboards would provide useful overviews.
<< there should be an app like an 'auto-RAG' that scrapes RSS feeds and URLs,
I am not aware that it exists yet, but the challenge I see with it is rather simple: you get overwhelmed with information really quickly. In other words, you would still need a human somewhere in that process to review those scrapes, and their quality varies widely. For example, even on HN it is not a given that a link will be pure gold (you still want to check whether it fits your use case).
That said, as ideas goes, it sounds like a fun weekend project.
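For what it's worth, the retrieval core of that weekend project could look something like this toy Python sketch: a bag-of-words index standing in for real embeddings. Feed fetching (e.g. via feedparser) is left out, and all names and URLs below are invented.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class ToyRAGIndex:
    """Minimal index; a real system would use embeddings and a vector store."""
    def __init__(self):
        self.docs = []  # list of (source_url, text, term_counts)

    def ingest(self, source_url, text):
        self.docs.append((source_url, text, Counter(tokenize(text))))

    def query(self, question, k=2):
        q = Counter(tokenize(question))
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[2]), reverse=True)
        return [(url, text) for url, text, _ in ranked[:k]]

index = ToyRAGIndex()
index.ingest("https://example.com/feed/1",
             "New open-weights LLM released with longer context window")
index.ingest("https://example.com/feed/2",
             "Garden tips for growing tomatoes in spring")
hits = index.query("what's new with LLM context windows", k=1)
```

The human-review problem mentioned above would sit between `ingest` and `query`: something has to decide which scraped items are worth indexing at all.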
The best place for the latest information isn't tech blogs, in my opinion; it's the Stable Diffusion and local LLaMA subreddits. If you are looking to learn about everything on a fundamental level, you need to check out Andrej Karpathy on YouTube. There are some other notable mentions in other people's comments.
Are you wanting to get into LLMs in particular, or something else? I am a software engineer also trying to make headway into so-called "AI", but I have little interest in LLMs. For one, the field is suffering from a major hype bubble right now. Second, and because of reason one, it has a huge amount of attention from people who study and work on this every day, and I don't have the time commitment to compete with that. Lastly, as mentioned, I have no interest in it, and my understanding of LLMs leads me to believe they have few interesting applications besides generating a huge amount of noise in society and dumping heat. Internet content, like blogs, articles, and even YouTube videos, is already being overrun by LLM-generated material that is effectively worthless. I'm not sure LLMs are a net positive.
For me personally, I prefer to work backwards and then forwards. What I mean is that I want to understand the basics and fundamentals first. So I'm slowly trying to bone up on my statistics, probability, and information theory, and I have targeted machine learning books that also take a fundamentals-first approach. There's no end to books in this realm for neural networks, machine learning, etc., so it's hard to recommend beyond what I've just picked, and I'm just getting started anyway.
I wrote a couple of these books (and published them as open access).
Here's my one on computational probability. The code and math here underlie "AI": the same fundamentals, and even the same code libraries (JAX, PyTorch, etc.).
https://bayesiancomputationbook.com/welcome.html
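(To give a flavor of what "computational probability" means in practice, here is a standard beta-binomial example, not taken from the book: the exact conjugate update next to a brute-force Monte Carlo approximation of the same posterior.)

```python
import random

# Prior Beta(1, 1) on a coin's bias, then observe 7 heads in 10 flips.
heads, flips = 7, 10
alpha, beta_ = 1 + heads, 1 + (flips - heads)   # conjugate posterior: Beta(8, 4)
posterior_mean = alpha / (alpha + beta_)        # = 8/12, exactly 2/3

# The same answer by rejection sampling, the way probabilistic-programming
# tools approximate posteriors when no closed form exists:
rng = random.Random(0)
samples = [p for p in (rng.random() for _ in range(200_000))
           if sum(rng.random() < p for _ in range(flips)) == heads]
mc_mean = sum(samples) / len(samples)           # close to posterior_mean
```

The point of the exercise: the analytic and simulated answers agree, and once you believe that, the machinery inside PPLs stops being magic.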
I also posted my more specific guidebook to the fundamentals of GenAI above. Hope both help
It has a mix of concepts and hands-on code, and lots of links to the best places to learn more. I'm keeping it up to date as well; I'm about to merge a guide on building applications, which is what it sounds like you want.
We actually just wrote a book with your profile in mind, especially if by "AI" you mean LLMs and you're a visual learner. It's called Hands-On Large Language Models, and it contains 300 original figures explaining the main couple hundred intuitions and applications of these models. You can also read it online on the O'Reilly platform. I find that after acquiring the main intuitions, people find it much easier to move on to code implementations or papers.
As I was building up my understanding/intuition for the internals of transformers + attention, I found 3Blue1Brown's series of videos (specifically on attention) to be super helpful.
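(If it helps to see the mechanism stripped of everything else: here's scaled dot-product attention, softmax(QK^T / sqrt(d)) V, in plain Python on tiny hand-picked vectors. No learned weights, just the computation those videos visualize.)

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention on plain lists of vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output: weighted mix of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# One query attending over three key/value pairs; it matches the first key best,
# so the output leans toward the first value vector.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
V = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
result = attention(Q, K, V)
```

Everything else in a transformer (multiple heads, learned Q/K/V projections, masking) is layered on top of this one routine.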
Ollama is like cPanel for models. It's not going to familiarize you with the lower-level implementation, which is just as important as knowing the math.
That was my approach. Being aware of the internals, not just the equivalent of "git pull model", got me a job without a CS degree or a long career in software. YMMV.
Every six months or so I write something (often derived from a conference talk) that's more of a "catch up with the latest developments" post - a few of those:
I read about 30 LLM papers a couple of months ago, dated 2018-2024. Mostly, folks are publishing on the "how do we prompt better" problem, and you can get the gist of that in about a day by reading a few blog posts (RAG, fine-tuning, tool use, etc.). There is also progress being made on model capabilities, like multimodality, and each company seems to be pushing in only slightly different directions, but essentially the models are still black boxes.
It depends what you are looking for, honestly; "the latest things happening" is pretty vague. I'd say the place to look is probably just the blogs of OpenAI/Anthropic/Gemini, since they are the only teams with inside information and novel findings to report. Everyone else is just using the tools we are given.
Lots of people can get impressive demos up and running, but if you want to run AI products in production, you're going to have to do system evals. System evals make sure your product does what it says on the box, even for qualities that are hard to quantify.
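(A minimal sketch of the shape of such an eval harness; the model, prompts, and checks below are all stand-ins for your real pipeline and quality criteria.)

```python
def fake_model(prompt):
    """Stand-in for a call into your real LLM pipeline."""
    canned = {
        "refund policy?": "You can request a refund within 30 days of purchase.",
        "reset password?": "Click 'Forgot password' on the login page.",
    }
    return canned.get(prompt, "I don't know.")

# Each eval case pairs an input with cheap programmatic checks on the output.
EVAL_CASES = [
    ("refund policy?", [lambda out: "30 days" in out,
                        lambda out: len(out) < 200]),
    ("reset password?", [lambda out: "forgot password" in out.lower()]),
]

def run_evals(model, cases):
    """Run every case through the model and record pass/fail per case."""
    results = []
    for prompt, checks in cases:
        out = model(prompt)
        results.append((prompt, all(check(out) for check in checks)))
    return results

report = run_evals(fake_model, EVAL_CASES)
pass_rate = sum(ok for _, ok in report) / len(report)
```

In practice the checks are often another model grading the output against a rubric, but the loop, the case set, and the tracked pass rate look the same.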
Simon's blog is fragmented because it's, well, a blog. It would be hard to find a better source to "keep updated on things AI" though. He does do longer summary articles sometimes, but mostly he's keeping up with things in real time. The search and tagging systems on his blog work well, too. I suggest you stick his RSS feed in your feed reader, and follow along that way.
Swyx also has a lot of stuff keeping up to date at https://www.latent.space/, including the Latent Space podcast, although tbh I haven't listened to more than one or two episodes.
He was only gone for a few days, IIRC. At any rate, he's back publishing AI related content again, and it looks like all (?) of his old content is back on his YT channel.
Honestly, his channel's quality is notably different from that of the other two you mentioned. I'm vaguely curious what you get out of it that makes you put him on the same tier.
I think you replied to the wrong person. I didn't put DaveShap on any tier or anything.
That said... I will say that in one of my other replies I did mention that some YT channels in this space can be a bit tabloid-ish, and I may have had Shapiro partly in mind when saying that. But I still subscribe to his channel and some similar ones, just to get a variety of takes and perspectives.
For news-like content I follow accounts on X: @kimmonismus, @apples_jimmy, and the accounts of Anthropic, Mistral, Gemini / DeepMind, and OpenAI.
I think everyone who is really interested in the hot AI developments should also follow what comes out of China. I follow https://chinai.substack.com/ but I am open to hearing about other Chinese resources.
Machine Learning Mastery (https://machinelearningmastery.com) provides code examples for many of the popular models. For me, seeing and writing code has been helpful in understanding how things work and makes it easier to put new developments in context.
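(In that spirit, the classic first see-it-in-code example, my own sketch rather than anything taken from that site: fitting y = wx + b with plain batch gradient descent, no libraries.)

```python
# Synthetic data from the line y = 2x + 1; the goal is to recover w=2, b=1.
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]

w, b, lr = 0.0, 0.0, 0.02
for _ in range(5000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Step downhill.
    w -= lr * grad_w
    b -= lr * grad_b
```

Every deep-learning framework is, at heart, this loop with automatic gradients and much bigger parameter vectors; seeing it bare makes the library versions far less mysterious.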
My issue with YouTube channels that focus on AI news is that they're heavily incentivized to deliver a frequent stream of attention-grabbing news. Week-by-week updates aren't that helpful: it's easy to miss the bigger picture, and there's too much content for it to feel like a good use of time.
Unwind AI would be helpful. They publish daily newsletters on AI as well as tutorials on building apps with step-by-step walkthrough. Super focused on developers. https://www.theunwindai.com/
Lots of good suggestions here already. I'd start by adding one quick note, though: "AI" is more than just LLMs. Sure, the current, trendy, fashionable thing is all LLMs, but the field as a whole is still much larger. I'd encourage you not to focus myopically on LLMs to the exclusion of everything else. Depending on your existing background knowledge, there's a lot to be said for going out and getting a copy of Artificial Intelligence: A Modern Approach and reading through it. Likewise for something like Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow.
Beyond that: there are some decent subreddits for keeping up with AI happenings, a lot of good YouTube channels (although many of the ones covering the "current, trendy" AI stuff tend to be a bit tabloid-ish), and even a couple of Facebook groups. You can also find good signal by choosing the right people to follow on Twitter/LinkedIn/Mastodon/Bluesky/etc.
There is a favorite link on the original post. You can also save the content using a variety of methods, such as Pocket, or paste it into a tool like Obsidian.
Maybe you should try Google instead of being so condescending, and compare the first two pages of results with this page...
We are not exactly talking about big secrets here. We are talking about "llm learning resources" keywords, which apparently need handholding in 2024, and about "acknowledging the value of the community".
https://news.ycombinator.com/item?id=36195527
Hacker's Guide to LLMs by Jeremy from Fast.ai - https://www.youtube.com/watch?v=jkrNMKz9pWU
State of GPT by Karpathy - https://www.youtube.com/watch?v=bZQun8Y4L2A
LLMs by 3b1b - https://www.youtube.com/watch?v=LPZh9BOjkQs
Visualizing transformers by 3b1b - https://www.youtube.com/watch?v=KJtZARuO3JY
How ChatGPT is trained - https://www.youtube.com/watch?v=VPRSBzXzavo
AI in a nutshell - https://www.youtube.com/watch?v=2IK3DFHRFfw
How Carlini uses LLMs - https://nicholas.carlini.com/writing/2024/how-i-use-ai.html
For staying updated:
X/Twitter & Bluesky: go and follow people who work at OpenAI, Anthropic, Google DeepMind, and xAI.
Podcasts: No Priors, Generally Intelligent, Dwarkesh Patel, Sequoia's "Training Data"