
Neural network AI is simple. So… Stop pretending you are a genius - ghosthamlet
https://www.kdnuggets.com/2018/02/neural-network-ai-simple-genius.html
======
throwaway2016a
If they are so simple why do they take 4 months of 15 hours a week on an
online course to learn? Or, why do I (who did pretty well in my AI class in my
Computer Science program 10 years ago) have such a hard time with it?

The way I see it there are two options based on this article:

1. Either I am stupid - I hope not :(

2. The people who teach these courses are terrible at teaching

Third option:

3. This article is vastly oversimplifying

I'm actually leaning towards a mix of #2 and #3 but I'm biased against #1.

~~~
mcintyre1994
Which course are you doing? I've been doing fast.ai and can't recommend it
enough, it's extremely practical and based on an outstanding library built on
PyTorch. The person who teaches it is exceptional, so if #2 was your problem
then you won't need to worry about that.

------
taeric
This was more fun than I expected. Loop the loop is now officially stuck in my
head.

I do think it is harsher than it needs to be. Sometimes the little
accomplishments aren't for others; essentially, many people use public
journals nowadays.

Though ultimately, I think that logic applies here, too. Specifically for this
rant.

~~~
flossball
It is a solid rant in that there are many people out there getting paid well
for (or selling) dubious models that will never work outside a narrow
training set. Their bosses or customers will be none the wiser, but the
results are dangerous.

I witnessed similar problems in search, where data retrieval was not
guaranteed. Systems were often fragile and needed close monitoring for
document-feeding problems and index health. Some companies _cough_ (Autonomy)
_cough_ were selling complete shit at insane prices for critical applications
like legal doc searches. Many companies stay in business thanks to easily
snowed/biased/bribed tech journals and by paying off client execs with
strippers, parties, etc.

BTW, search hasn't changed much and it is still dangerous to just assume it
works for things as 'simple' as ELK.

------
dlwdlw
I'd say a fairly large portion of people getting involved in neural networks
do so out of a desire to be better than others. A domination mindset rather
than a curiosity mindset. Another large group is the pragmatic mindset since
there's so much money.

The domination mindset leads to arrogance and, in my opinion, a chain of
dehumanization that results in an inhuman product, because it is not rooted
in happiness.

The chain starts with the dehumanization of human graders: incredibly complex
and amazing neural networks (people) paid minimum wage to improve much worse
ones. The developers and managers above them feel smug in their much higher
salaries (and implicitly higher ability and worth), not realizing that they
are viewed the same way by many deep learning folks.

------
ageitgey
This is a very poor article. It manages both to be demeaning to people who
want to learn something new and to be riddled with factual errors.

Here are just a few factual errors:

1. The 11-line example is the simplest possible neural network, with a very
basic implementation of gradient descent in a high-level vectorization
framework (numpy). It's exactly what you'd cover in the first week of a
course on neural nets. But obviously there are many things to learn beyond
that before you can solve real problems effectively with neural nets. In
fact, the example code is lifted 100% from Hacker News user williamtrask's
great article [1], which ironically closes with a list of 10 topics to
investigate next, to start learning the additional things you'd need to make
your neural net useful for real problems.
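For context, here's a sketch in the spirit of that 11-line example (not the article's exact code): a single-layer network trained with plain gradient descent in numpy. The data and seed are made up for illustration.

```python
import numpy as np

# Toy data: the target y happens to equal the first input column.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 0, 1, 1]]).T

rng = np.random.default_rng(1)
w = 2 * rng.random((3, 1)) - 1          # random weights in [-1, 1)

for _ in range(10000):
    out = 1 / (1 + np.exp(-X @ w))      # forward pass (sigmoid)
    err = y - out                       # how wrong we are
    w += X.T @ (err * out * (1 - out))  # gradient step through one layer

print(out.round(2))                     # predictions approach y
```

This is the whole trick for a single layer; everything beyond it (more layers, better optimizers, regularization) exists because real problems don't look like this toy.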

2. This section:

> '“Deep Learning” and n-Layers of depth is just a neural network that runs
> its output through itself. This is called Recursive Neural Networks (RNN),
> because you loop the loop.'

...is full of errors. RNNs are Recurrent, not Recursive. They have nothing to
do with having more layers, but with having a memory of previous states. And
in deep learning, having more layers is often a strategy to reduce the
dimensionality of the input and learn higher-level representations of data by
using a variety of tricks/layer types/connections. It's not just "looping the
loop".
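To illustrate the distinction, here is a minimal recurrent step (toy, made-up sizes): the same weights are reused at every time step, and the hidden state carries memory of previous inputs forward. No extra layers are involved.

```python
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.standard_normal((4, 3)) * 0.1  # input -> hidden
W_hh = rng.standard_normal((4, 4)) * 0.1  # hidden -> hidden (the recurrence)

def rnn_step(x, h):
    # New hidden state mixes the current input with the previous state.
    return np.tanh(W_xh @ x + W_hh @ h)

h = np.zeros(4)                           # initial state: no memory yet
for x in rng.standard_normal((5, 3)):     # a sequence of 5 inputs
    h = rnn_step(x, h)                    # same weights, every step
```

The "depth" of a deep network is a separate axis entirely: stacking different layers, not feeding a state back through the same one.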

3. The whole section about not initializing the random state and thus getting
different results on a desktop vs. a mobile phone shows a total
misunderstanding of how things work. You aren't typically re-training a
neural network on a phone; you are using pre-trained weights from a different
machine to make inferences. That is typically 100% deterministic.
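A tiny sketch of that point, with made-up weights: once the weights are frozen, inference is just a fixed function of the input, so the same input yields the same output on any machine (up to floating-point rounding).

```python
import numpy as np

# Hypothetical pre-trained weights, e.g. loaded from a file shipped
# with the app rather than trained on the device.
w = np.array([[0.5, -1.2],
              [0.3,  0.8]])

def predict(x):
    # Deterministic forward pass: no random state is touched.
    return 1 / (1 + np.exp(-(w @ x)))

x = np.array([1.0, 2.0])
assert np.array_equal(predict(x), predict(x))  # identical on every call
```

Randomness only enters during training (weight initialization, data shuffling, dropout), which is exactly the part that doesn't happen on the phone.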

4. The description of TensorFlow as a wrapper to provide visualization tools
is totally wrong. It's a framework for running mathematical operations defined
as a graph data structure. That makes it possible to do cool things like split
up the work across multiple processors efficiently and it provides pre-made
operations that are useful for building machine learning models more complex
than a single layer neural network.
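A toy illustration of the graph idea (plain Python, not TensorFlow's actual API): operations are defined as nodes in a graph first and evaluated later, and that separation is what lets a real framework optimize the graph and split work across devices.

```python
class Node:
    """One operation in a computation graph."""
    def __init__(self, op, *inputs):
        self.op = op          # the function this node applies
        self.inputs = inputs  # upstream nodes it depends on

    def eval(self):
        # Evaluate dependencies first, then apply this node's op.
        vals = [n.eval() for n in self.inputs]
        return self.op(*vals)

def const(v):
    # A leaf node that just produces a value.
    return Node(lambda: v)

# Build the graph for (2 + 3) * 4 without computing anything yet...
expr = Node(lambda a, b: a * b,
            Node(lambda a, b: a + b, const(2), const(3)),
            const(4))

# ...then evaluate it as a separate step.
assert expr.eval() == 20
```

In TensorFlow the nodes are tensor operations (matmuls, convolutions, gradients) rather than scalar lambdas, but the define-then-run structure is the point.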

The only area where the author is really correct is that there's a lot of
misunderstanding of neural networks as a magical solution to problems when
they are really just fancy statistical models that are subject to learning
relationships that aren't there.

Yes, the world of AI/ML is full of hype and bad reporting. But there are
great ways to talk about the limitations of ML, and it isn't by belittling
people. Here's an example of a great article: [2]

[1] https://iamtrask.github.io/2015/07/12/basic-python-network/

[2] https://www.alexirpan.com/2018/02/14/rl-hard.html

------
IshKebab
Yeah... the result may be relatively simple, but getting there isn't.

