But seriously, do you have a favorite example where the Laplace transform is used to prove a theorem or used in practice to solve a problem?
I'm familiar with the undergraduate differential equations examples. But there are plenty of things taught at the undergraduate level which are tractable and helpful to build intuition but either a) aren't important from a research perspective or b) aren't used in practice. The Fourier transform has both.
All the time in AC circuits. Especially for anything RF-related. It's vastly, vastly easier to work in the (complex) frequency domain. Antenna and filter design are pretty much all done in the s-domain.
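To make the s-domain point concrete, here's a minimal sketch (component values are illustrative assumptions, not from the comment above): a first-order RC low-pass filter has transfer function H(s) = 1/(1 + sRC), and the steady-state AC response falls out by evaluating H on the imaginary axis at s = j·2πf.

```python
import math

# Hypothetical RC low-pass filter: H(s) = 1 / (1 + s*R*C).
# R and C are assumed values chosen to put the cutoff near 1 kHz.
R = 1_000.0      # ohms
C = 159.15e-9    # farads

def H(s: complex) -> complex:
    """First-order low-pass transfer function in the s-domain."""
    return 1.0 / (1.0 + s * R * C)

# Steady-state sinusoidal response: evaluate on the jw-axis, s = j*2*pi*f.
f_c = 1.0 / (2 * math.pi * R * C)    # cutoff frequency in Hz (~1 kHz here)
s = 2j * math.pi * f_c
gain = abs(H(s))                     # magnitude at the cutoff, ~1/sqrt(2) (-3 dB)
```

No differential equation ever gets written down: the circuit is designed and analyzed entirely as an algebraic function of the complex frequency s, which is exactly why filter and antenna work lives in this domain.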
Atom is my editor of choice due to the Platform.IO extension for IoT development. On my machine, just starting Atom 1.13 involved two interruptions by message boxes saying "Editor is not responding", which had to be closed manually in order to continue with the start-up process. Those annoying boxes have vanished completely with 1.14, so there is some significant performance improvement.
If page 33 depicts the working of the brain on a very high level, the world model (or simulator) residing inside the agent must contain a model/simulator of the agent itself.
Could this give rise to self-perception or consciousness?
The idea that self-representations give rise to consciousness exists in neurophilosophy, and it is closely related to the representational theory of consciousness and the higher-order monitoring theory.
I happened to be reading The Selfish Gene by Richard Dawkins and Gödel, Escher, Bach by Douglas Hofstadter at the same time, and both of them point at exactly this being the reason for consciousness. I was struck by how both reached the same conclusion, that consciousness arises from recursion of self-perception, from very different starting points.
Also, if anyone is watching Westworld (spoilers), it seems to come to the same conclusion funnily enough. What finally gives the androids consciousness is some kind of recursive idea of listening to themselves.
Re Westworld: The theory of consciousness explored in the show is explored in more detail in Jaynes' The Origin of Consciousness in the Breakdown of the Bicameral Mind, as alluded to both in the show and in the title of the final episode. I've just picked it up, and it's a pretty interesting read so far. I've also noticed that a lot of little details from the book are used in the show, such as referring to memories as "reveries" at points, and talking of minds as "hosts" of consciousness. I may need to re-watch the show after finishing the book!
Michael Crichton, who wrote and directed the original Westworld (also of Jurassic Park fame), describes the same idea in his novel Prey[0], which is about emergent AI from swarms of self-replicating nano-bots.
During the speech [1], Yann (surprisingly) didn't mention consciousness at all. The focus of this segment was the need to "imagine" the future. The premise is that "common sense" – Yann's big theme of the talk – is about "filling in the gaps" of incomplete information. We fill in the gaps by imagining the future.
So consciousness was not raised at this point. But that doesn't mean that it couldn't be an emergent property.
I wouldn't assume it must, although it would be neat if it did. There are many ways to get agents to react to situations without self-awareness. In fact, the agents themselves can be decomposed into many otherwise incompatible sub-agents. See Minsky's Society of Mind.
I don't understand this argument. (It keeps coming up.) The two issues I have with this are:
a) Self-perception does not seem like consciousness to me at all. In meditation, if done properly, there is very little self left. It feels more like pure awareness. It is almost the opposite of the model of the self that the brain constructs.
b) I fail to see how the fact that a mechanism refers to itself should somehow give rise to the feeling of consciousness. Why would it? Nobody would predict consciousness from that if we did not already know it exists, and it's easy to imagine a device that has a model of itself and is not self-aware.
I understand the need to somehow fit this into our scientific framework, and the idea that "consciousness is just what it feels to have a brain" is the best thing we have, but I don't think it explains anything. There is something we are missing.
Answers to "can we give consciousness to AI" is heavily dependent on how you define consciousness in the first place.
Many definitions can coexist, some more actionable than others. "Being aware of the existence of oneself in the world, and being able to reflect on one's own decisions" seems relatively practical. So, self-perception + self-reflection = consciousness (as a definition).
From this starting point, it seems reasonable to derive that consciousness can arise from 1) a mental representation of the world that includes oneself, and 2) empathy for others (I can guess why this other worker made this decision) that, once applied to the actions of the self as if it were an external agent, gives self-reflection.