
A Meta Lesson - bibyte
http://andy.kitchen/a-meta-lesson.html
======
7373737373
What confuses me about the state of the art of machine learning is that at
both training and execution time, there is no notion of system resources.

These systems _cannot_ runtime-optimize themselves like the human brain,
because attention, boredom, frustration and other mechanisms cannot arise
'naturally' without such a notion.

Also, if the system cannot infer its own boundaries (by feeding its output
into itself, at least indirectly), it cannot develop a notion of self and
engage in meta-learning.

Given that the brain is a 20W system evolved towards learning things over a
long time span in a changing environment, I'd really like to see an energy
consumption comparison with current neural nets. With AutoML it may be
necessary to separate network construction, feature learning and inference.

Is there a Moore's law equivalent here? Maybe launch a competition: "You are
given ten 10-Watt-years; create the best system for this task (e.g. image
segmentation)". With current electricity prices, training a system equivalent
to a 10-year-old brain will cost around $100 (not including I/O and the
training environment).
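
For what it's worth, a rough back-of-the-envelope check of that $100 figure
(the ~20 W brain power draw and 10-year span come from the comment above; the
~$0.06/kWh electricity price is my own assumption, and real prices vary):

    # rough sanity check of the ~$100 claim; the price per kWh is an assumption
    watts = 20                   # approximate power draw of a human brain
    hours = 10 * 365 * 24        # ten years, in hours
    kwh = watts * hours / 1000   # ~1,752 kWh of energy
    dollars = kwh * 0.06         # ~$105 at an assumed $0.06/kWh
    print(f"{kwh:.0f} kWh, ~${dollars:.0f}")

So the order of magnitude holds up, at least for the raw electricity.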

------
anonytrary
> Rodney Brooks (approx.): No, human ingenuity is actually responsible for
> progress in AI. We can’t just solve problems by throwing more compute at
> them.

I find this amusing considering nature just kept throwing more time at the
universe until humans emerged. That said, I don't think Brooks is wrong. I
can't recall where in the lectures this is, but I remember Feynman going on a
rant about how poorly designed the human eye was. Human ingenuity really is an
essential piece of the puzzle, seeing as Nature, in and of itself, isn't very
smart.

~~~
taneq
You kind of have to add an implicit "unless we want to use a computer the size
of a planet and wait two billion years for the answer".

~~~
Jedi72
42?

~~~
taneq
I'm not saying that's why the Earth is here, but have you noticed that almost
every new medical advance does something beneficial _for mice_?

------
vicpara
The nice thing about philosophical views is that they don't really matter.
Whatever gets the ball rolling and beats the SotA, we'll accept.

In science we shouldn't have to abide by philosophical views. They only hurt.
Theorems, proofs and empirical evidence should settle it. Can you believe
there was a time when the world was against Yann LeCun because everyone
speculated that neural networks wouldn't work for complex scenarios?

Both points of view, Sutton's and Brooks', are utter speculation, as we cannot
generalize or learn anything meaningful from them. Both of them are saying:
"Since I don't know for sure how we're going to improve AI in the long run,
here's what I suspect the right approach is". They are even looking at the
same history of AI and seeing different things. Go figure.

In mathematics, old tricks only get you so far. The hard problems of a
particular moment in time can only be overcome by deploying new tricks. Why
wouldn't that be the case in the AI space?

Hard-coded rules got us started. Then more computation made the next step.
Then we mixed the two. Then we created more purposefully handcrafted
architectures such as CNNs. Then we manually annotated millions of data
points. Then GANs came along to fix some stability issues.

What's the next trick now? Don't worry: since no one knows, both authors are
just speculating.

------
gtr32x
Reading both of the posts makes me believe that Sutton has stated a more
global outlook on the progression of complexity than Brooks did, or that
Brooks is simply trying to continue encouraging the current generation of AI
research.

My naive take on each of their arguments, which are seemingly obvious but
nonetheless profound:

Sutton: advancement in computation capacity > specifically devised methods

Brooks: building specific tools helps in solving the problem

You see, neither of them is wrong. However, what Brooks is arguing for is
essentially: hey, we invented paper, but we have no computer yet, so let's
make some lined paper and graph paper to increase our productivity, hooray!
Then what Sutton is saying is: dude, show me how your method will continue to
be productive once computers are invented.

I do also want to propose my takeaway from these pieces, though. From Brooks I
take that building tools/methods is essential to local optimization, and that
tools/methods can be extended to fit new global advancements. And to Sutton's
point, we are in a state of continual progression through the extension, in
essence, of Moore's Law.

------
mannykannot
Sutton's position might be described as Darwinian (our brains evolved with
nothing but natural selection guiding the process), while Brooks' position
might be called Chomskyan (we are born with a rich set of rules in place).
These are not, of course, mutually exclusive.

------
marmaduke
Perhaps a more meta lesson is that a single viewpoint is rarely sufficient.

I’d also be curious to see how hard researchers like these would push their
differences if publication were anonymous, or if, as with code, contributors
were acknowledged but we referred to the project by name: Linux, not Torvalds
et al.

