
Practical Data Ethics - tosh
http://ethics.fast.ai
======
mark_l_watson
Fast.ai really is a one stop shop for all things related to being a deep
learning practitioner.

I used to manage a deep learning team that concentrated on using TensorFlow
properly, etc. I am all but retired now, so I am making the switch to using the
fastai libraries and materials. Since I am now a “gentleman scientist” (being
retired) I still enjoy projects and some research, but I am also cutting out
all unnecessary time-wasting activities to concentrate on the things that most
fascinate me.

Off topic, but I think this is good general advice: try to not follow the
crowd, but rather, go for what most resonates with who you are, where you want
to steer your career, etc.

~~~
NetOpWibby
Thank you for your insight, I appreciate it.

------
amrrs
This is a very important topic given how mainstream AI and ML solutions are
getting. Also, a lot of bootcamps are just churning out highly potent Data
Scientists. This is an essential area of discussion that everyone working in
tech should be aware of! I'm glad and thankful to Jeremy and Rachel for their
work on this and fastai in general.

~~~
SQueeeeeL
I tend to think the opposite trend is occurring: for every IBM that gets out of
facial recognition, you have 10 Clearview AIs jumping at the chance to get any
contract they can. Without meaningful regulation, nothing will change;
expecting people to quit their jobs (and ability to provide for their partner
and children) because of an ethics video series seems like a hollow solution.

~~~
jph00
> _expecting people to quit their jobs (and ability to provide for their
> partner and children) because of an ethics video series seems like a hollow
> solution_

Why are you making straw man arguments? The person you're replying to didn't
state or imply such an expectation, nor did the OP.

In fact, meaningful regulation is discussed at length in the course. People
involved in technology policy are one of the audiences that it's designed for.

~~~
SQueeeeeL
I think the idea of this being an "area of discussion" is disingenuous, as
those who consume the course (the employees/creators of the software) have very
little control over the ethics of AI. There will always be another coder who
just graduated and is looking to make money. I just disagree with the premise
that having some engineers learn ethics can meaningfully change the state of
things.

~~~
dpflan
Is there a quantity/critical mass of engineers who learned ethics that can
meaningfully change the state of things?

If not, who can change the state of things?

~~~
SQueeeeeL
The people funding massive amounts of development? That was my point in
relation to Clearview AI: as long as we allow bad actors, we will have a
negative state of things. There will always be someone else out there to take
new contracts, because money talks. If these projects were illegal,
corporations would avoid them.

The way to change the state of things would be to "write your congressman" (I
really enjoy Sorry To Bother You's take on this idea). Basically we're fucked
in terms of expecting ethical uses of AI.

------
cjhveal
I found this video (linked as required reading for lesson 2) really
interesting:

> 21 fairness definitions and their politics

[https://www.youtube.com/watch?v=jIXIuYdnyyk](https://www.youtube.com/watch?v=jIXIuYdnyyk)

It feels accessible to people with stats 101 under their belt, and pays a lot
of attention to the human factors involved in these problems.
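As a rough sketch (my own toy example, not from the video), two of those fairness definitions can be computed and compared on made-up data. The metric names come from the fairness literature; the predictions, labels, and groups below are entirely hypothetical:

```python
# Toy illustration of two common fairness definitions.
# All data below is fabricated for demonstration purposes.

def demographic_parity(preds, groups):
    """Rate of positive predictions per group.
    'Demographic parity' holds when these rates are equal across groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    return rates

def equal_opportunity(preds, labels, groups):
    """True-positive rate per group.
    'Equal opportunity' holds when these rates are equal across groups."""
    rates = {}
    for g in set(groups):
        pos = [i for i, grp in enumerate(groups) if grp == g and labels[i] == 1]
        rates[g] = sum(preds[i] for i in pos) / len(pos)
    return rates

# Hypothetical predictions for 8 applicants split across two groups.
preds  = [1, 1, 0, 0, 1, 0, 0, 0]
labels = [1, 1, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity(preds, groups))          # {'A': 0.5, 'B': 0.25}
print(equal_opportunity(preds, labels, groups))   # {'A': 0.666..., 'B': 0.5}
```

The point the video makes (and the sketch hints at) is that a classifier can fail one definition while doing better on another, so "fair" depends on which metric you pick.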

~~~
jph00
Arvind Narayanan, who did that video, is amazing. It's astonishing how he's
achieved such expertise in computer science, machine learning, cryptocurrency,
and technology policy.
[https://www.cs.princeton.edu/~arvindn/](https://www.cs.princeton.edu/~arvindn/)

------
otoburb
Here's the announcement of the new course this submission links to:
[https://www.fast.ai/2020/08/19/data-ethics/](https://www.fast.ai/2020/08/19/data-ethics/)

~~~
jph00
...and this is the syllabus:
[http://ethics.fast.ai/syllabus/](http://ethics.fast.ai/syllabus/)

------
site-packages1
I do like fast.ai for a lot of things. Their video production is very, very
lacking though. I’ve tried to watch some of their DL courses, and really
they’re about as good as some stodgy old non-tech-savvy professor attempting
to make an instructional video: lots of echo, sound coming in and out,
basically filmed from the back of an auditorium, etc. I can’t comment as much
on the content, as it was hard to get through. Some MIT OCW courses are
similar. But from the fast.ai content I’ve read, it’s very good.

An example of a well done video is an Andrew Ng Coursera course: great sound,
great pacing, easy to follow, like you’re on a Zoom call with him directly,
etc.

~~~
jph00
That's all fixed for this year's courses. (This is the first of this year's
courses to be released.)

------
momokoko
I always find it fascinating that immoral cultures invented “ethics” because
they created economic systems that were inherently immoral and needed a way to
rationalize the hurtful actions the system they created now requires them to
take.

~~~
Areading314
What defines an "immoral culture"? Wouldn't defining that concept require a
notion of ethics?

~~~
TeMPOraL
Arguably every culture is immoral to some degree; you can't reach perfection.

However, GP has a point: people follow incentives like water flows downhill.
Most talk about ethics exists to highlight situations where economic
incentives make people do immoral things.

------
overeater
Could someone who watched it give a review of what it covers?

There are a lot of ethics lectures that follow a formula: 1) show (usually
slightly misleading) news stories of bad outcomes from automation/AI, like the
Target diaper story; 2) say that these things are a problem and we need to
think more when designing this software.

I was wondering which of these videos cover more concrete guidance or
solutions, so I can focus my viewing.

~~~
jph00
You can see the syllabus:
[http://ethics.fast.ai/syllabus/](http://ethics.fast.ai/syllabus/) .

There's a free book chapter that covers a subset of the material, so you might
find that a more focused approach if that's what you're looking for:
[https://github.com/fastai/fastbook/blob/master/03_ethics.ipy...](https://github.com/fastai/fastbook/blob/master/03_ethics.ipynb)

------
TheAdamAndChe
Fastai is awesome. I had no experience with deep learning a couple of weeks
ago, but using their resources, I made cryptocurrency prediction software that
I am testing now.

I'm definitely not a quant, but I could build this quickly. Because of this, I
think deep learning will truly be commoditized in a short time.

------
ekianjo
> Lesson 6: Algorithmic Colonialism

What a choice of words.

~~~
totetsu
Why is that?

> "When corporations from one country develop and deploy technology in many
> other countries, extracting data and profits, often with little awareness of
> local cultural issues, a number of ethical issues can arise"

It seems to be drawing a comparison between the power dynamics and resource
flows of historical colonial systems and the practices of handling data and
providing algorithmic services in countries with a colonial history.

~~~
ekianjo
Because colonialism is about a mindset in the first place. Here, you assign a
mindset where it might not exist, which is ridiculous.

