
Multi-Task Learning in the Wilderness: Karpathy on Autopilot progress [video] - kjksf
https://slideslive.com/38917690/multitask-learning-in-the-wilderness
======
chronolitus
As someone who works in ML, in particular on deep neural networks, I am always
surprised when someone seriously proposes using them to solve complex real-
world problems (anything with humans in the loop fits solidly in this
category; by "real-world" I mean real-world inputs and outputs). I am even
more surprised when the problem definition additionally involves potential
injury and death.

And yet Karpathy seems to firmly believe that neural-network-based self-
driving systems are not only reasonable, but within reach using something
close to today's algorithms.

This always leaves me wondering whether (i) I am somehow using an entirely
different class of algorithms from what these guys are using, (ii) the AI/ML
bubble has even the top minds in our field drinking a little too much kool-
aid, or (iii) my colleagues and I are just not good enough at ML to unlock
its full magical potential.

(To be more exact, I am not disputing the truly impressive power of today's
ML/deep-NN algorithms when it comes to statistical inference. But does anyone
here believe that you can get to Level 5 on today's roads without something
close to general-purpose intelligence? How do you react to written road signs
explaining temporary route alterations? How do you know to take special
precautions in situations where context indicates a likelihood of unmodeled,
irrational behavior?)

I am genuinely curious: who here believes (or doesn't believe) that today's
AI can really generalize that far, when (i) adversarial attacks are still a
real issue [1], (ii) random seeds can sometimes have a greater influence than
model architecture [2], and (iii) we are not even sure how well these models
approximate the learning that takes place in biological brains [3]?

[1] [https://sites.google.com/view/lidar-adv](https://sites.google.com/view/lidar-adv)

[2] "We do not find any evidence that the considered models can be used to
reliably learn disentangled representations in an unsupervised manner as
random seeds and hyperparameters seem to matter more than the model choice."
- [https://arxiv.org/pdf/1811.12359.pdf](https://arxiv.org/pdf/1811.12359.pdf)

[3] [https://www.cell.com/trends/cognitive-sciences/fulltext/S136...](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613\(19\)30012-9)

~~~
FHorse
I also work in ML/DL. I'd use a different word than "surprised": "appalled"
is more appropriate. If the video proves anything, it's that everything they
do is a hack; there is no principled way to do any of it. Something is
seriously wrong when they have to resort to oversampling to deal with
catastrophic forgetting, a.k.a. learning instability. Yet I'm supposed to
trust this thing with my life? Today's DNNs are still incredibly crude. There
are fundamental problems that need to be solved before we can deploy them in
anything other than toys.
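For readers unfamiliar with the term: here is a minimal numpy sketch (my own
toy construction, not from the talk) of what catastrophic forgetting looks
like, and what rehearsal/oversampling of old-task data does about it. A
single shared parameter is trained on task A, then on task B; training on B
alone overwrites the solution to A, while mixing A's data back in settles on
a compromise.

```python
import numpy as np

def train(w, xs, ys, lr=0.02, steps=500):
    # Batch gradient descent on squared error for a 1-parameter model y = w*x.
    for _ in range(steps):
        grad = np.mean(2 * xs * (w * xs - ys))
        w -= lr * grad
    return w

# Two toy "tasks" sharing one parameter: task A wants w = 2, task B wants w = -1.
x = np.arange(1.0, 6.0)
xa, ya = x, 2.0 * x
xb, yb = x, -1.0 * x

w = train(0.0, xa, ya)       # learn task A: w ends near 2
w_seq = train(w, xb, yb)     # then task B alone: w ends near -1, task A is gone

# "Rehearsal": keep (over)sampling task A's data while training on task B,
# so the shared parameter compromises (here, w ends near 0.5) instead of
# overwriting task A entirely.
w_mix = train(w, np.concatenate([xa, xb]), np.concatenate([ya, yb]))
```

It is a hack in exactly the sense above: it does not stop the interference
between tasks, it just keeps re-exposing the model to the old data so the
damage is averaged out rather than accumulated.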

