Deep Learning Foundations to Stable Diffusion (fast.ai)
615 points by noob_eng on April 5, 2023 | 113 comments



Hi folks. Nice to see our new free (and ad-free) course here on HN! This course is for folks who are already comfortable training neural nets and understand the basic ideas of SGD, cross-entropy, embeddings, etc. It will help you both understand these foundations more deeply, since you'll be creating everything from scratch (i.e. from only Python and its standard library), and understand modern generative modeling approaches.
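To give a concrete taste of what "from scratch" means (an illustrative sketch, not the actual course notebooks), the starting point is operations like matrix multiplication written in plain Python, which later get swapped for optimised library versions:

    def matmul(a, b):
        # Multiply matrices given as lists of lists: (n x k) @ (k x m) -> (n x m)
        n, k, m = len(a), len(b), len(b[0])
        assert len(a[0]) == k, "inner dimensions must match"
        return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
                for i in range(n)]

    print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]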

We study and implement a lot of papers, including many that came out during the course, which is a great way to get practice and get comfortable with reading deep learning literature.

If you need to get up to speed on the foundations first, you should start with part 1 of the course, which is here: https://course.fast.ai

If you've got any questions about the course, generative modeling, or deep learning in general, feel free to ask!


not to derail the conversation...

I came across your APL study session videos while exploring the other material. I used APL professionally for about a decade back in the early '80s. I am always pleasantly surprised when I see interesting work being done in APL. Interestingly enough, I always thought APL would eventually evolve into a central language for AI. There were attempts at designing hardware-based APL machines way back when. Of course, much like the language itself, they were ahead of the technology of the times.

I don't do much with APL these days. I do keep it around for quick calculations and exploration, far more so than doing any projects. In many ways, a thinking tool of sorts.


Not criticizing you, here, just asking a sincere question.

It was obvious to me on first encounter that APL would never become widespread. Its character set was too abstruse for most programmers and at the time required a special monitor. Plus it seemed hard to maintain if you weren’t either a mathematician or a full-time APL developer.

Your thoughts?


Not OP, but I'll say you're right, but for the wrong reasons. Having embarked on learning APL by doing a project in it, the character set becomes second nature in a few days' time. No special monitor needed, at least on my Mac; just a new font and it all worked flawlessly.

What caused me to hang up my glyphs were the inconsistencies and head-scratching behavior of the language in so many corner cases. Working with very nice mentors, I found that one could get around them, but because of the language's legacy they have to be kept around to support existing code bases.

I loved the notion of having a language that gave a first class experience with matrices, though, and after looking around the space, I finally came to Julia and have been very happy.


Thanks for your perspective, which is easy to understand. I hasten to add that when I talk about special monitors, it's because I first encountered APL in the early '80s.


I have recently read "The Little Learner" and was quite amazed how much I knew already just because I know APL/J. With the right libraries, these vector languages would be perfect for neural network / deep learning tasks.


I hope that one day you will do another part about LLMs, in PyTorch - that would be great :-)

Thanks for this material.


There are plans for an LLM course but very early ideas, stay tuned!


Thanks Jeremy, this looks awesome. Do you think there's much value in learning much of the math in addition to the material in these courses?


The math is covered in this course. We look at both ordinary and stochastic differential equations, and at Langevin dynamics / score matching, as well as the needed probability foundations. We only cover the necessary bits of these topics -- just enough to understand how diffusion models work (e.g. we don't look at the physics stuff which is where Langevin dynamics came from).

On the whole, we cover the needed math at the point where we use it in the course. But there's also a bonus lesson that focuses on just the mathematical side, if you're interested: https://youtu.be/mYpjmM7O-30
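To give a flavour of the Langevin dynamics piece, here's a toy sketch in PyTorch (my illustration, not code from the course; score_fn stands in for a learned score model). Sampling just repeats a noisy step along the gradient of log-probability:

    import torch

    def langevin_step(x, score_fn, eps=1e-2):
        # One unadjusted Langevin step: x <- x + (eps/2)*grad log p(x) + sqrt(eps)*noise
        return x + 0.5 * eps * score_fn(x) + eps ** 0.5 * torch.randn_like(x)

    # Toy check: a standard normal has score -x, so repeated steps should
    # yield samples with mean ~0 and std ~1.
    x = torch.zeros(1000, 2)
    for _ in range(500):
        x = langevin_step(x, lambda x: -x)
    print(x.mean(0), x.std(0))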


Excellent, thanks for the thorough response!


Maybe odd question, but: would you recommend taking this course if my goal is to build (and sell) products leveraging ML? (e.g. SaaS)

As in, with the pace of improvements from other AI startups and general availability of their APIs (e.g. GPT-4), is there a specific advantage (aside from maybe cost) to learning to build my own models? Or is the course more suitable for people wanting to become ML engineers (or similar) and to find a job as such? Thanks in advance


Part 1 would probably be best for that: https://course.fast.ai . Lots of alums have gone on to do what you describe -- it's probably a good idea to have the level of understanding introduced there even if you mainly use external APIs. Part 2 is more for folks who want to go deeper in order to try new research ideas, optimize their models beyond standard approaches, etc.


Just my two cents, as someone with 8000 academic citations and one AI SaaS exit, on AI entrepreneurship:

The more time you spend on marketing, the better.

We found that as our AI got worse, our product got better.


Thanks for your reply, though I'm not sure I'm reading your comment correctly. As in, I agree that marketing is super important, but if I had an idea for a certain SaaS product that requires a certain ML model, I'd need to decide whether to build it myself or use somebody else's APIs.

> We found that as our AI got worse, our product got better.

Interesting. Could you elaborate on this too?


I guess my last sentence was confusing. What I meant was that we fell into the common trap: scientists who leave academia and enter the startup world want to do the fanciest science possible, out of pride and a desire for uniqueness. But good business is not like art, where the mandate is to be as creative and unique as possible. The analogy from sports (which I don't watch) that applies to business is that teams just copy each other's plays and focus on out-executing each other.

So, specifically, our startup started with the idea that high-brow tech would be a key differentiator and give the best user experience. This is the common narrative trotted out in survivorship-bias stories that make good tech news articles.

Whereas making cuts to our R&D time, focusing on UI/UX, and working with simpler science ultimately led to a better product.

From consulting, sales, and corporate work, one learns that the dirty secret of big-iron tech companies is that everything sold as ML is just nicely packaged logistic regression. Or it was, five years ago. Nowadays I guess it would maybe be transformers, but the point is that off-the-shelf ML with solid data engineering is what drives 99% of good products. Rarely is it truly innovative tech. I think Pete Skomoroch was the one who joked: "People say I'm a data scientist. I'm actually a data plumber."

I guess one could take this lesson from science. What you learn from a good PhD advisor is: a) read the latest work, b) note the simple baseline approach constantly trashed for scoring 2% worse than the sophisticated, intricate new thing proposed, and c) implement the simple baseline. Achieve impact not by hill-climbing on a standard metric, but by defining a new problem or arbitraging insights from adjacent fields, etc.


I see. Much clearer! Thanks for the explanation.


Could you elaborate on your last sentence?


> We found that as our AI got worse, our product got better.

I’m not smart enough to decode this. I can imagine multiple conflicting answers.


How long does it take to go through the course? I know the on-demand lectures are self-paced, but are we talking "expect 1 lecture a week, full time"?


Based on the last time we did a part 2 course, time spent varied a lot. Some folks could do it in 10 hours of study (plus watch time) per lesson, and some took a whole year off to do the two parts of the course (including doing significant projects).


Thanks Jeremy for putting this course out for everyone, looking forward to completing it.


> since you'll be creating everything from scratch (i.e. from only Python and its standard library)

Is this correct? I skimmed the notes for both parts 1 & 2, and they state that PyTorch is required?


Yes, after we create something from scratch, we then see how to use implementations from libraries such as PyTorch. For basic operations like convolution, whilst we do create a GPU-accelerated version, it's not as fast as PyTorch/cudnn's highly-optimised version, so we use the PyTorch implementation for the rest of the course, rather than our handmade version.

So that means that nothing is mysterious, since we know how it's all made, but we also see where and how to use existing libraries as appropriate.

In practice, during the course we end up using PyTorch for stuff like gradients, matrix multiplication, and convolutions, and our own implementations for a lot of the stuff that's at a higher level than that (e.g. we use our own ResNet, U-net, etc.)
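To illustrate that progression (a rough sketch, not the actual course notebook), here's a naive from-scratch 2D convolution agreeing with PyTorch's fast one:

    import torch
    import torch.nn.functional as F

    def naive_conv2d(x, w):
        # From-scratch 2D cross-correlation, no padding/stride; x is (H, W), w is (kh, kw)
        H, W = x.shape
        kh, kw = w.shape
        out = torch.zeros(H - kh + 1, W - kw + 1)
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = (x[i:i+kh, j:j+kw] * w).sum()
        return out

    x, w = torch.randn(8, 8), torch.randn(3, 3)
    ours = naive_conv2d(x, w)
    theirs = F.conv2d(x[None, None], w[None, None])[0, 0]
    print(torch.allclose(ours, theirs, atol=1e-5))  # True -- same maths, PyTorch is just much faster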


Something of this sort is also done by Andrej Karpathy in his Zero to Hero series: https://karpathy.ai/zero-to-hero.html


Ah, I've been waiting for this, looking forward to going through it!

While I appreciate your top-down approach and buy your arguments for doing it that way in previous courses, I also like this new bottom-up one, where you really learn what lies beneath.


Amazing, that's exactly what I've been looking for and what I'm sure many here will appreciate. Thank you!


Wow, you are really fulfilling the promise of a pragmatic approach. Awesome!


Thank you Jeremy, it is an amazing course!


Thank you for putting out this course!


I just finished this! My thoughts:

I recommend it. I feel I can now read an arbitrary paper, frown a lot, and eventually understand what it's talking about - to the point where I can implement my own buggy version. And hey, I built my own Stable Diffusion!!

I found the previous version of this course[1] to be a good complement: it's older (it predates SD) but I feel it explains core concepts slightly better. Very understandable, given how close to the bleeding edge this new version is...

Perhaps an even better complement was Karpathy's famous course[2] - similar material but builds towards GPT instead of SD. The fastai coding style is somewhat esoteric (to me) so it was helpful to contrast with Karpathy's more familiar style. I recommend doing both courses. Also I believe the fastai folks are planning a part 3 which covers LLMs; looking forward to that.

Concepts from part 2 helped my hobby project, a 7-day forecast of renewable electricity and power price[3].

Feels pretty great to have built my own Stable Diffusion and GPT! I am grateful to Jeremy and Andrej.

[1] https://course19.fast.ai/part2

[2] https://karpathy.ai/zero-to-hero.html

[3] https://greenforecast.au/


Do you recommend doing fastai and karpathy simultaneously or one at a time?


Depends. Some would benefit from simultaneous, others from sequential. I did them in "chunks", starting with fastai, but that was driven more by the release schedule. Personally, I'd recommend trying both and seeing which style you prefer; focus on whichever one makes you more excited to get your hands dirty and play with stuff.


I second the idea of doing older versions of part 2 as well.

One will get a much better grasp of DL concepts doing those versions rather than this one.

E.g. the 2019/20 version, IIRC.


Damn good (and wryly funny) assessment. Thanks much!


How long did it take you?


I didn't really track... apparently there's 30-ish hours of video, but lectures are just the beginning. The real learning happens when you play and build. The first lesson was released in October I think.


I audited this course - I say “audited” because I didn’t have time to do the homework; I just watched the lectures and spent what time I could playing around in Colab building some toy models. This is very much a full-time, university-level course and you should treat it that way if you intend to complete it properly. You will need to reserve 10 hours per week for the homework - at least.

I enjoyed how Jeremy and his teaching assistants step through every detail so that you can understand how these fantastically complex systems actually work, from the ground up. Nothing is glossed over or taken for granted.

Much of the course material is building an AI programming framework from scratch. It’s well worth the hassle; I now at least have an intuition about how these systems work, rather than a glossed over impression with many holes. The other thing I greatly appreciated were the insights from other classmates in the forums. Some people way smarter than me took this course and Jeremy would incorporate their flashes of insight into the lectures.

I give this course a 10/10 and I hope my life gets a little easier so that I can get back to the lessons and finish the homework one day.


> You will need to reserve 10 hours per week for the homework - at least.

Is that for part 2? Or also for part 1? How long did it take you, doing it 10 hours per week?


For both. Although you will be much faster in part 1 if you know the basics of deep learning. But then again, you can go so much deeper on certain topics, and maybe explore some tangents to the content of part 1.


I didn't take part 1, so I can't comment intelligently on this.


I have worked on the course with @jph00 and it's been an incredible experience! This is one of the most cutting edge courses about deep learning and diffusion models. Highly recommend folks check it out!

Link for more info: https://www.fast.ai/posts/part2-2023.html


I just used fastai last week for an experiment at work. I have little to no DS experience, but quickly skimmed through part 1.

Quite easily, I managed to use a pretrained ResNet to do some quite astonishing things on completely unrelated pictures we have. And it only took about 10 epochs at ~4 seconds each, so less than a minute of training, and it was already quite good.

What I did: we take pictures of things during a process, and at that point I know what the volume should be. So I used about a thousand of those images to train a regression model to estimate the volume of the thing in each image.

Then I run the model on other images and compare what the model thinks the volume is to what it should be. When it's off, the model is mostly correct and we have an error on our side.

Quite astonished at how quick and easy this was to do with the high-level API of fastai. Took me 2-3 days.
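For the curious, here's roughly how that might look with fastai's high-level API - a hedged sketch, not my actual code; assume a DataFrame df with image filenames in an 'fname' column and known volumes in a 'volume' column:

    from fastai.vision.all import *

    dls = DataBlock(
        blocks=(ImageBlock, RegressionBlock),   # image in, continuous value out
        get_x=ColReader('fname', pref=Path('images')),
        get_y=ColReader('volume'),
        splitter=RandomSplitter(0.2),
        item_tfms=Resize(224),
    ).dataloaders(df, bs=32)

    learn = vision_learner(dls, resnet34, y_range=(0, float(df['volume'].max())))
    learn.fine_tune(10)  # ~10 epochs, as described above

    preds, targs = learn.get_preds()  # compare predicted vs expected volumes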


This is cool, but I have a fundamental question. Why learn machine learning when all the machines will learn how to create and control themselves and powerful elites will offload the burden of governments to control us?

Or are we racing each other to fulfill the last jobs on earth? Is this progress for humanity, or enslavement and the end of our species? I am serious. This question is honest. Maybe my IQ is too low to understand the logic of this innovation.

Maybe I have seen too much tech implementation doing bad stuff to humans, who knows? Please, enlighten me.


This question isn't specific to machine learning - if you've given up already, don't bother learning anything and see how it goes.


I guess the fear, be it well founded or delusional, is fueled by the idea that a hammer, for example, has less potential to inadvertently beat humans at a civilizational level.


I'm not questioning the fear at all. My point is that you should do the best you can with what you're given, and throwing up your hands and saying that you aren't going to learn anymore isn't doing the best you can.


He didn't say or imply he had given up and I found the question quite interesting.


I never give up. Just asked for information, to fight my own bias. Obviously there is a world outside which will have analogue and mechanical parts. But this is not the question.

The question is why cutting your wrists with a blunt object and streaming the event for validation is suddenly rational behavior.


Are you a LLM or something? :)


Yes. Like you. :)


If you don't need to learn anything and can fulfil and sustain yourself then I wouldn't. The reality is that's not the case for most people. You also don't need to learn things with the expectation of a job.


I'm learning this stuff because I think it's fascinating.

To be fair I know I'm light years behind the cutting-edge so I feel I can sleep assured that I won't take too much of the blame for having created our new AI overlords, or taking the jobs, etc.

Also, generative art and LLMs are undoubtedly impressive, but not completely my cup of tea. But you can always take your ML knowledge and find a domain/problem you care about to apply it to. It's not like literally everything will be solved in the future.


I think it's all to play for, the future that we create using AI technologies. If we don't engage though as citizens then it probably is game over, yes.

The folks at OpenAI like Sam Altman are certainly very aware of the potential hazards of LLMs and advanced AI generally, and Elon Musk famously is. The fact is though, these technologies exist and we can't bury our heads in the sand. There is also an enormous upside if we, as a society, marshal the technology and regulate it properly.

I think a degree of nervousness is an appropriate reaction, I'm a little nervous. I'm also excited about the potential. I also see it as a somewhat inevitable evolution of computers. From at least the early 1950s this was "the plan" in a sense. Alan Turing certainly thought so.

But yes, nervousness is warranted. We must tread carefully.


You've guessed it right, AGI is a long way off (if ever) but you're not exactly a general intelligence either - yes, you are fucked.

It's not like I could tell ChatGPT "I want a fully featured browser" or "I want a machine learning framework" and it'll just come up with a working Chrome or PyTorch. But those are so far outside your abilities that they might as well be magic as far as you're concerned.


Very informative and polite response. Thank you.


I don't get why your question is getting downvoted. This is a serious question, for engineers and "normal people" as well: why bother learning?


That's not a serious question; it's the very definition of trolling. You can replace Stable Diffusion with any other subject to learn, and that particular doom scenario with a similar one, or, if you don't feel very inclined to speculate, just say the good old "why learn anything when we are all eventually going to die and be forgotten".

Not cool.


I can empathize with this question.

When I go to learn something for my career, I find myself asking how much time I should actually dedicate to it when an LLM could do so much better.

I want to dig deeper and understand things deeply - but if my time could instead be spent on surface level things, using LLMs to fill in the gaps, it's difficult to motivate myself to find the time.

I still am trying to learn things deeply, I've had this course bookmarked for a while now and have gone through it a bit.

I guess for me the core fear is that the programming field is fast-paced. I could learn something deeply, but someone with a surface level understanding who uses an LLM could maybe surpass me. Will an employer pick whoever understands something deeply, or whoever has the solution first?

Maybe then the core question here is this: As LLMs improve, is it worth it to learn things deeply for your career, or is it better to gain a surface level understanding and defer to an LLM?

There is also the argument that understanding something deeply will allow you to use an LLM much better than someone with a surface level understanding... but I still have that fear that that may not matter anymore.

I'll still learn things for fun though, so at least I have that!


Please, tell me your definition of trolling. Maybe after you help me understand, I will explain to the illustrators why they must prompt with text instead of drawing?

Assuming that I don't learn is wrong. I have a local installation of SD with a lot of models to test. The only useful thing in this gizmo is the ControlNet module, or maybe the Photoshop plugin for outpainting. You can upload your linear representation (a sketch) and generate. But in essence, this is not progress.

I don't extend my drawing ability or produce any original work by synthesizing all the available artworks. The long-term negative effects are obvious. Within 3 to 5 years, kids will not bother to draw at all. In my view, this is not progress for humanity; there is a ton of scientific data about the importance of drawing for the development of the mind.

The same applies to every human form of creation. You cannot beat the ultimate "calculator", a model trained on all human intellectual production. Don't get me started on corporations and greed, and how they will view the necessity of human labor.


> Please, tell me your definition of trolling.

You enter a room where people are discussing a technical subject, trying to find ways to collaborate and reach a better understanding, and then you start giving your speech about the dark future of humanity, education, poverty, and politics.

Picture yourself in the physical world doing that and imagine the reactions. Why do you think it's acceptable online?


First answer for the trolling "argument": Because of freedom of speech.

Second answer: I am quite capable, technically speaking, of assessing the technology. I don't see the benefits for humanity in A.I. "art" generators. And implementing A.I. outside the narrow use cases, which must be regulated in a form close to how we regulate nuclear energy, is not a good thing. Exponential growth is a reality with this one.

Outside the hype cycle, a lot of specialists are ringing the alarm bell already.

Dismissing their expertise because it represents an obstacle to startup and corporate ROI is not a form of rational thinking.

On the other hand, the signs are clear, and the lack of empathy and ethics in the tech industry will finally come home to roost.


Humans did not stop learning to count after the calculator was invented.


Accusing other people of trolling when they're not is itself trolling.


It's going to be very useful to know how all this stuff works so you can apply it, or even just understand it - its strengths and weaknesses. There is still a lot that has to happen and be done before any of it is anywhere near useful, safe, and reliable enough to be just let loose; if anything, there will be more and more need for "humans in the loop" as we delegate some stuff to be a bit more ML-assisted or even semi-autonomous.

The original question is a bit "why bother with anything?", so it's a little hard to answer, I think, the way it was phrased.


1. Because it's fun.

2. Because "all the machines will learn how to create and control themselves and powerful elites will offload the burden of governments to control us?" is a ridiculous sci-fi scenario, and the chatbots aren't nearly as good as so many people pretend they are.


The rational options are to embrace the change, or violently *and materially* oppose it. Sitting on one's hands is only sensible if you don't think it's actually a threat.


The future John Connor surely needs that kind of knowledge to re-program the T-800. So does Neo if you expect the future to be not quite that violent.


Maybe this is the most rational answer. We are flawed creatures: vanity, self-importance, greed, power. Who will watch the "watchers"? Good intentions are behind wars, and behind gain-of-function research. But in reality, there is always a chance of some "lab" leaking the "agent" of destruction.


> Or we race ourselves to fulfill the last jobs on earth?

I find GitLab employees (developers) being excited over Copilot to be almost the definition of insane

short-term gain, but in the long term it destroys demand for their own product

developers who don't need to be employed don't need a subscription to GitHub

automating your own company/business out of existence


Depends what people are motivated to do - if you just want to build cool things quicker, you will probably be excited by ML.

If you like writing algorithms and enjoy the mental problem-solving aspect of it, then you might not like it.

If your main motivation is to protect your job/livelihood and ensure your existing skillset is in demand, then you will probably be worried.

But in any scenario, the cat is out of the bag - you can't un-invent it so might as well get excited and be on the train, rather than be the person that gets left behind.


> But in any scenario, the cat is out of the bag - you can't un-invent it so might as well get excited and be on the train, rather than be the person that gets left behind.

there are plenty of other ways to deal with it

politically push for AI output to be banned, made un-exploitable or highly taxed

or the luddite approach

time will tell how the several billion people about to be made destitute will react


Pushing for AI output to be banned is not a long term approach IMO unless all countries do it in unison - countries who do not ban AI output will be much more competitive, and those that ban it will be left in the dust. Any country that banned computers in 1979 would have really hurt themselves.

Society will need to adapt - it doesn't necessarily make sense to have several billion people doing something that a computer can do quicker, more accurately, and more easily, just to keep people in employment (doing something that a lot of people hate).


> Pushing for AI output to be banned is not a long term approach IMO unless all countries do it in unison - countries who do not ban AI output will be much more competitive, and those that ban it will be left in the dust. Any country that banned computers in 1979 would have really hurt themselves.

this argument is extremely similar to that which was used against proposals to ban slavery


What a bizarre comparison.

Automating jobs does not have the same moral implications as slavery.

That might be a valid argument for keeping slavery if it wasn’t entirely immoral.

But I suspect AI isn’t like the discovery of slavery - it’s more like the discovery of electricity which put candlemakers out of business.


it's the same argument

"if we don't have child labour/slavery/no workplace rights/lax environment law/planning restrictions/data privacy law/... then we'll be outcompeted by those that don't"

unregulated AI fits in there perfectly

these persisted for so long because of economics over ethics... maybe we'll make the correct decision this time

(separately: if we do get to AGI then the slavery comparison suddenly gains an extra dimension... as that's literally what its operators will be doing)


I agree the last/separate point has interesting/complex ethical implications.

But in terms of the first point - the ethics of forcing someone into labour are just not the same as the ethics of automating people's work.

Although some people do genuinely enjoy their work, most people work because they are forced to (pay for shelter, food and a good life for their kids). Not forced in the same way as slavery, but there is still mostly a requirement to work in modern society.

It’s one thing asking people to do tasks that are valuable to society, but asking people to do jobs they hate, which could easily be automated in the future, just for the sake of them “being in a job” seems pointless at a societal level to me. Why would we have people doing things they hate that don’t really deliver ‘real’ value when we could automate it and have them do something else? (Preferably something they hate less.)


History has shown us that they will not react until they are already destitute.



Would it be possible to have a transcription of the videos? I have never been able to follow a video for learning; for me the video format is like someone else controlling the way I learn. I much prefer to be able to design my own schedule with text. By using text I am able to skim quickly and select only those parts I consider important. Maybe other people are able to do that with video, but I am not in that group.


Yes, every lesson has a complete transcript written by our wonderful community members. You can access them through the usual YouTube interface.


Is there any chance that the transcripts can be published on the website? I'd love to be able to search through them. love the course btw!


In YouTube, click "Show transcript". It's searchable. Click in it to jump to that part of the video.


This would be ideal. I much prefer seeing everything in text form, in the same place, without having to jump out to YouTube.

I can read many times faster than listen, and more easily skim back if I realise I don't understand something.


The full transcripts are available here in plain text form:

https://github.com/fastai/course22p2/tree/master/summaries


Yet another application for AI (is there an acronym for that?). Given a video, generate a nicely readable webpage with a transcript and tastefully selected stills and short clips.

So many tantalizing possibilities for next-gen A/V blending text (toc, transcript, summary, elaboration, explanation, interaction) with massaged video (speed, excerpts, mashups, synthesis) and HID ("hmm, my viewer just frowned and shifted gaze to the text pane, so let's cut to a synthesized reprise of that last bit with the conceptual steps filled in a bit"). So much fun to be had, modulo patents.


Good luck with the code.


I've watched all of the fastai tutorials. I screwed up by not going through and actually writing the code, but I hope to do it again and do it right this time. My biggest question is whether this is the whole part 2 course, or still the 2-3 preview lessons released around last November?


No, this is now the whole part 2 course, so lessons 9-25.


The fast.ai courses are exceptionally good for getting started and learning the concepts, but, once you've mastered an intuitive understanding of the techniques, I highly encourage people to dig deeper into the concepts - specifically, reverse SDEs, stochastic calculus, optimization, score matching.

Jeremy does an excellent job making the content approachable, but, if you want to go beyond being an excellent practitioner, you cannot skip the theory bits.


FYI, this course does build a lot of intuitive understanding of the techniques, but it also covers this theory. Everything is done from the foundations up, and all of the stuff you mentioned is taught and implemented from scratch as part of the course.

Of course there's loads more to do on gaining intuition, and loads more theory to be learned once the course is done! Only so much can fit in a single course :)


My nephew, who is going to college this fall on the East Coast, said "what's the point of a CS major anymore", and I asked "what do you mean", and he goes "ChatGPT". This is very cynical coming from a 16-year-old; however, it got me thinking about the impact of ChatGPT on people's attitudes toward learning ML/AI, if ChatGPT can do it all!


The year is 1900. College-bound nephew says "what's the point of a physics major anymore?" I asked "what do you mean?" and he goes "Newton's Laws".


In my opinion you still need to have a grasp of the concepts, because ChatGPT still hallucinates stuff and messes stuff up.


This looks awesome; however, I probably need to do a bootcamp with real people to compete against and have camaraderie with, to actually retrain on the basics here.

It doesn't matter where in the world this is (I like to travel), but does anyone have a recommendation for in-person bootcamps on AI?


If you're interested in taking this course, you can check the forums and/or Discord where people will organize study groups going through the course. Some of the study groups organized are in-person, some are virtual. These are usually great opportunities to study with some real people at the same time and get that camaraderie.

https://forums.fast.ai
https://discord.gg/YHtEBwzV


Fantastic, I’ll sign up and have a look! Thanks


Universities are often open to people sitting in on classes; you can act as if you were an enrolled student.


fast.ai is hands down the most beautiful learning material for deep learning / ai


Why would I want to learn fast.ai...

I feel that while this course is surely superb, I don't want to invest time in a new framework. What's your opinion?

We've got Lightning, fast.ai, and another one that I forgot about... why?


This course doesn't use the fastai framework.

But to answer your question -- the reason IMO to learn the fastai framework (or indeed any software which incorporates a significant amount of novel research) would be to learn about the new ideas that it introduces, in order to become a more effective and informed practitioner. And if you find you like it, you may even decide to use it.


Did you create this account just to hate on the course? xd


Love all his courses!


This is awesome!

I loved the previous FastAI courses, and the top-down approach taken in them is really fun to go along with.

I guess now it's time for an LLM-themed course?


How much math does the reader need under their belt? Is high-school math sufficient?


Yes high school math is sufficient, as long as you did basic calculus (derivatives and the chain rule) and probability. We cover stuff that's well beyond high school math, but it's introduced when it's used, and we link to external resources for more details as needed.
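For instance, this is the sort of thing we mean by the chain rule - a quick illustration (not from the course notebooks) checking the hand-derived result against PyTorch's autograd:

    import torch

    # Chain rule: d/dx sin(x^2) = cos(x^2) * 2x
    x = torch.tensor(1.5, requires_grad=True)
    torch.sin(x ** 2).backward()
    print(x.grad)                                       # autograd's answer
    print(torch.cos(torch.tensor(1.5) ** 2) * 2 * 1.5)  # hand-derived -- same value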


> as long as you did basic calculus (derivatives and the chain rule) and probability

How much additional effort would it require to incorporate these topics into your course, potentially as supplementary content? This could make the course more accessible to students from countries with less comprehensive educational systems.


I just started part 1 and had no idea part 2 was coming out. This looks great


Really love these hands-on tutorials; d2l and fast.ai have helped me a lot!


I avoided them both because learning PyTorch, and then those two frameworks on top, seemed a crazy idea.


The course does not use the fastai framework.


Is there a stablediffusion.cpp port like whisper.cpp and llama.cpp?


Jeremy and Fast.AI should get bloody medals.



