
Fast.ai Part 2: Deep Learning from the Foundations - alohia
https://course.fast.ai/part2
======
jph00
Nice to see this on the front page! :D There's some more info about the Swift
lessons on the TensorFlow blog here:
[https://medium.com/@tensorflow/3ee7dfb68387](https://medium.com/@tensorflow/3ee7dfb68387)

Happy to answer any questions you all might have.

~~~
epiphanitus
Been starting your DL course and it's amazing. I love how your approach is
both ambitious and practical at the same time.

You mentioned you studied philosophy as an undergrad. How did you end up
working with analytics and DL?

(I know this is an old thread but I couldn't resist)

------
yeldarb
I had the pleasure of participating in the synchronous live-stream of the
course and loved it.

Looking forward to the release of v2 of the library that was prototyped during
the course!

Roadmap:
[https://forums.fast.ai/t/fastai-v2-roadmap/46661](https://forums.fast.ai/t/fastai-v2-roadmap/46661)

------
sanchezdev
I'm excited to see a couple of lectures with Swift. All the work to add
Python interoperability and build swift-jupyter is much appreciated; it
feels like Christmas in June.

~~~
codesushi42
Why would you need Swift in Jupyter?

Just implement your model's architecture in Python. Export it. Invoke it at
runtime via Swift and TF.
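
A minimal sketch of that split, in pure Python on both sides (standing in for
the Python-train / Swift-serve handoff being described; the toy linear model
and the JSON "artifact" are made up, where real code would export a
SavedModel):

```python
import json

# "Training" side: fit a trivial linear model y = w*x + b by least squares,
# then export its parameters -- a stand-in for exporting a SavedModel.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # generated from y = 2x + 1
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - w * mx
exported = json.dumps({"w": w, "b": b})  # would be a SavedModel on disk

# "Runtime" side: the serving process (Swift, in the comment above) only
# needs to load the exported artifact and run it -- no training code required.
params = json.loads(exported)
predict = lambda x: params["w"] * x + params["b"]
print(predict(10.0))  # -> 21.0
```

The point being made: the runtime side never sees the training code, only the
exported artifact.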

~~~
erikgaas
You can, but it's not ideal. What if you want to write your own CUDA kernel
for your experiment? Python isn't really set up for this unless you want to
throw odd C++ integrations into your code. Swift is designed to map directly
to the underlying instructions, which would make deep learning much more
expressive and flexible, with fewer errors.

This question is addressed extensively in the course. Check out the last two
lectures; they do a great job of going over lots of different reasons.

~~~
codesushi42
That doesn't really make sense. The code you implement in Swift in Jupyter
also needs to be available at runtime to execute, meaning you can do the
exact same thing in Python, because your model architecture is going to be
embedded in the exported model.

For custom kernel code, what you really want to use is a custom TF op. But I
doubt that's what you're getting at anyway, because that's for more advanced
use cases.

~~~
jph00
The goal is to allow Swift to be used for writing MLIR and XLA kernels. The
new LazyTensor under development already allows for fused XLA operations to be
created in Swift. There's an awful lot you can do in Swift which is very very
hard to do properly in Python. I've got a bit more background on this here:
[https://www.fast.ai/2019/03/06/fastai-swift/](https://www.fast.ai/2019/03/06/fastai-swift/)

 _Edit: HN isn't letting me reply deeper, so I'll reply to "what are the
benefits over C++?" here. The first is that MLIR has dialects that support
stuff like polyhedral compilation, which result in much more concise and
understandable code, which is often faster too. The second is that using the
same language from top to bottom means you can profile/debug/etc your code in
one place, which is much more efficient. And you don't have to learn two
languages. And you don't have to use C++, which (for me at least) is a big
win!_ ;)
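
Not Swift or XLA, but a toy numpy illustration of what "fused" means here
(the loop stands in for the single kernel an XLA-style compiler would
generate; everything in it is made up for the sketch):

```python
import numpy as np

x = np.arange(6, dtype=np.float64)
w, b = 2.0, 1.0

# Unfused: each op materializes a full intermediate array
# (roughly what eager, op-by-op execution does).
t1 = x * w
t2 = t1 + b
unfused = np.maximum(t2, 0.0)

# Fused: one pass over the data, no intermediate arrays -- the kind of
# single kernel a compiler can emit for the whole expression relu(x*w + b).
fused = np.empty_like(x)
for i in range(x.size):
    v = x[i] * w + b
    fused[i] = v if v > 0.0 else 0.0

print(np.allclose(unfused, fused))  # -> True
```

Same result either way; fusion saves the memory traffic of the intermediates,
which is where much of the speedup comes from.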

~~~
codesushi42
Thanks. What is the advantage of using Swift over implementing a custom TF op
in C++, and using its generated Python wrapper in Jupyter? Just not having to
deal with C++?

~~~
saynay
Not needing to know both Python and C++.

If you need to debug something, you don't need to use some mixture of pdb and
gdb.

Swift is a relatively young language, meaning it does not (yet) have weird
hairy bits to work around design decisions made 20 years ago.

Similarly, Swift is still getting defined in many areas. There is
(theoretically) the opportunity to influence language design decisions to
patterns that mesh better with ML needs.

This is largely my paraphrasing of the reasons stated by Jeremy:
[https://www.fast.ai/2019/03/06/fastai-swift/](https://www.fast.ai/2019/03/06/fastai-swift/)

~~~
codesushi42
Yes, but at the cost of portability. You won't be able to run the same model
on Android, for instance.

You're still debugging in two places: Swift and Python. Debugging Swift is
probably easier than C++, though.

I think using Swift is a valid solution for special cases, but not the best
solution for most cases. The TF authors already provide a suitable, general
solution in the form of custom TF ops.

And if you don't need a custom kernel, and the chances are you don't, then
stick with pure Python for maximum ease of use and portability.

------
rubyfox
Even if you’re not new to DL, I recommend watching these; there are always
little bits and pieces in the course that make great tips, and the part
midway through where an issue is found with PyTorch's layer initialization
is very impressive.

------
enraged_camel
I've been meaning to learn ML/DL.

My problem is that I can't think of any use cases for it, either in my
personal life or work life.

I understand that the technologies have a lot of potential, and are currently
used in many major projects and endeavors. I just keep drawing a blank when
trying to answer the question, "what would I do with this?"

If I am someone who is goal-driven (rather than, say, learning something for
the sake of learning), how can I motivate myself to pick this up?

~~~
jph00
You might find it helpful to see what other people are doing with deep
learning and whether any of it seems relevant to your interests. For
instance, here are some examples of folks from diverse backgrounds using DL
in their domains of interest:
[https://www.fast.ai/2019/02/21/dl-projects/](https://www.fast.ai/2019/02/21/dl-projects/).
Or here's a rather deep rabbit hole, with hundreds of replies from people
showing their learning projects:
[https://forums.fast.ai/t/share-your-work-here/27676](https://forums.fast.ai/t/share-your-work-here/27676).

I find it useful to think of DL as just another way to get computers to do
what you want. Rather than focusing on control flow and setting/reading
variables, you focus on providing examples to learn from. Both approaches can
do many of the same things, but each has areas it's better at. E.g. DL is
better for things where it's hard to explain just how you do them (seeing
pictures, hearing sounds, reading text), and traditional coding is better for
things that need specific logical steps. A combination of the two is often
best for solving end-to-end problems in practice.

------
lelima
I've never used Swift before; is it much different from Python? I like
fastai because it was made with Python logic.

On the other hand, I'm very excited to try the audio features.

~~~
oflannabhra
Swift is quite different from Python.

------
dpflan
Can someone who has completed Fast.ai courses comment on the experience and
then how you used the newly gained knowledge?

~~~
phy6
I started using the techniques immediately, for applicable deep learning and
NLP projects. There's no need to finish the entire course to get some
actionable rapid prototyping done. Each lecture is nearly self-contained once
you're comfortable with the basics.

~~~
dpflan
Thanks! Was this for work or personal use?

~~~
phy6
Both, as most of my projects are aligned with my personal startup, and the
others are for a salaried startup.

------
jray
Great, thank you so much, Jeremy. One question: will a new version of the
Machine Learning course be made?

~~~
jph00
Not in the short/medium term, at least. The stuff we covered in that Intro to
ML course hasn't really changed, so the current course is nearly equally
relevant today. (We used the fastai library a little for some basic utility
functions, and that bit has changed, but that's not a very important detail.)

------
pushpendra7
Finally, it's here

------
tectonic
I did fast.ai last year and it was fantastic.

