
Exactly. The people who are derisive of those who consider ML models to exhibit glimmers of true intelligence, because it's "only matrix multiplications," always amuse me. They don't seem to notice the contradiction in their position: that seemingly complex and intelligent outward behaviour should not count as evidence of actual complexity and intelligence.



If you study this historically you will see that every generation thinks they have the mechanism to explain brains (gear systems, analog control/cybernetics, perceptrons).

My conclusion is we tend to overestimate our understanding and the power of our inventions.


The difference is that we now actually have a proof of computational power and computational universality.


Analog circuits have the same computational power. Piecewise linear functions have the same computational universality.
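As a rough sketch of what that one-dimensional universality means (my own illustration, not part of the comment): any continuous f on [a, b] is uniformly approximated by the piecewise linear interpolant through breakpoints a = x_0 < x_1 < ... < x_n = b, and that interpolant can be written as a sum of hinge terms,

    \hat{f}(x) = f(x_0) + \sum_{i=0}^{n-1} c_i \max(0,\ x - x_i),
    \qquad c_i = s_i - s_{i-1}, \quad s_i = \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i}, \quad s_{-1} = 0.

The hinge \max(0, \cdot) is exactly a ReLU unit, so this is the one-hidden-layer neural net claim in a different costume.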


Except we didn't know any of that, nor did we know how to construct physical analogs in order to achieve universal computation. At best, we had limited task-specific computation, like clocks and models of planetary motion.


We have known about universal function approximators like polynomials and trig functions since the 1700s. Turing's and Gödel's results came in the 1930s. The cybernetics movement was big in the 30s and 40s, and perceptrons in the 50s and 60s.


Taylor expansions do not exist for all functions. Furthermore, our characterization of infinity was still poor, so we didn't even have a solid notion of what it would mean for a formalism to compute all computable functions. The notion of a universal computer arguably didn't exist until Babbage.
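A standard counterexample (my addition, not the commenter's) for the Taylor point above: the function

    f(x) = e^{-1/x^{2}} \ (x \neq 0), \qquad f(0) = 0

is infinitely differentiable everywhere, yet every derivative at 0 vanishes, so its Taylor series at the origin is identically zero and agrees with f only at x = 0. Taylor series alone cannot carry the universality claim.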

I stand by my position that having a mathematical proof of computational universality is a significant difference that separates today from all prior eras that sought to understand the brain through contemporaneous technology.


> Taylor expansions

That’s not what I’m talking about. This is a basic analysis topic:

https://en.m.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_th...

Weierstrass proved the polynomial case in 1885. Trigonometric series were already being explored in the 1700s, before Fourier.
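For readers following along, the statement being invoked (paraphrased by me, not quoted from the link) is the Weierstrass approximation theorem: every continuous function on a closed interval can be approximated by a polynomial to any tolerance, uniformly over the interval,

    \forall f \in C[a,b],\ \forall \varepsilon > 0,\ \exists p \in \mathbb{R}[x] :
    \sup_{x \in [a,b]} |f(x) - p(x)| < \varepsilon.

Stone's generalization replaces polynomials with any subalgebra of C(X), X compact, that contains the constants and separates points.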

> I stand by my position

And you're still ignoring the cybernetics and perceptrons movements I keep referring to, which go back the better part of a century and were informed by Turing.


> That’s not what I’m talking about. This is a basic analysis topic:

It's the same basic flaw: the theorem requires continuous functions. Not all functions are continuous, so this is not sufficient either.

> And you're still ignoring the cybernetics and perceptrons movements I keep referring to, which go back the better part of a century and were informed by Turing.

What about them? As long as they're universal, they can all simulate brains. Anything after Church and Turing is just window dressing. Notice how none of these newer ideas claimed to change what could in principle be computed, only how much easier or more natural a given paradigm might be for simulating or creating brains.


That objection just means the approximation works piecewise. That's also true of neural nets lol. You have to keep adding neurons to match the granularity of whatever discontinuities you have.

It's also a different issue from Taylor series, which rely on differentiability.

You do not understand this subject. Please read before repeating this: https://en.m.wikipedia.org/wiki/Universal_approximation_theo...
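To make the "more neurons, finer granularity" point concrete, here is a rough numerical sketch of my own (not from the linked theorem or the thread): a one-hidden-layer ReLU model with fixed, evenly spaced hidden units and a least-squares output layer, fit to a discontinuous step function. The jump rules out uniform approximation, but the average error shrinks as the width grows, because the transition region narrows with the unit spacing.

    # Hedged sketch: approximate a step function with a widening ReLU layer.
    # Hidden weights/biases are fixed on a grid; only the linear output layer
    # is solved, via least squares, which is enough to show the trend.
    import numpy as np

    def relu_features(x, width):
        # One hidden layer of ReLU units with biases spread over [-1, 1].
        biases = np.linspace(-1.0, 1.0, width)
        return np.maximum(0.0, x[:, None] - biases[None, :])

    def mean_error(width, n_points=2000):
        x = np.linspace(-1.0, 1.0, n_points)
        target = (x > 0.0).astype(float)              # discontinuous at x = 0
        H = np.hstack([relu_features(x, width), np.ones((n_points, 1))])
        w, *_ = np.linalg.lstsq(H, target, rcond=None)
        return np.mean(np.abs(H @ w - target))        # mean absolute error

    for width in (4, 16, 64, 256):
        print(f"width={width:4d}  mean abs error={mean_error(width):.4f}")

The error never reaches zero at the jump itself, which is the continuity caveat from upthread, but it can be driven as low as you like on average by adding units.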

> what about them

Then you seem to have lost the subject of the thread.



