Hacker News
Elon Musk says ‘digital superintelligence’ could exist in 5–6 years (foxbusiness.com)
5 points by notanothereric 11 months ago | 10 comments



"Geoffrey Hinton - Two Paths to Intelligence (25 May 2023, Public Lecture, University of Cambridge)

Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but they allow exactly the same computation to be run on physically different pieces of hardware. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and use very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. I will briefly describe one such algorithm. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation allows us to run many copies of exactly the same model on different pieces of hardware. All of these digital agents can look at different data and share what they have learned very efficiently by averaging their weight changes. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us.

The public lecture was organised by The Centre for the Study of Existential Risk, The Leverhulme Centre for the Future of Intelligence and The Department of Engineering."

https://youtu.be/rGgGOccMEiY
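The knowledge-sharing mechanism Hinton describes — identical model copies training on different data and pooling what they learn by averaging their weight changes — can be sketched in a few lines. This is a hypothetical toy (a linear model with made-up data), not Hinton's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_step(weights, X, y, lr=0.1):
    """One gradient-descent step on squared error; returns the weight delta."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return -lr * grad

# Every digital "agent" starts from the same shared weights.
weights = np.zeros(3)

# Each agent sees a different shard of data.
shards = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

for _ in range(100):
    # Each agent computes its weight change independently on its own data...
    deltas = [gradient_step(weights, X, y) for X, y in shards]
    # ...and the averaged change is applied to the shared weights,
    # so every copy benefits from what the others saw.
    weights += np.mean(deltas, axis=0)
```

This only works because the copies are bit-for-bit identical, which is exactly the property Hinton says analog ("mortal") hardware gives up.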


Elon is wrong, super-intelligence is already here.

TL;DR: artificial super-intelligence — which has apparently been created in our real world, not in sci-fi — is by itself more than enough of a risk.

But no one knows how risky these systems are. Most are betting they aren't all that dangerous, so let's go forward.

---

What we don't actually know for sure relates to consciousness. You can't tell precisely whether the advanced LLMs — or even some of the smaller, though still large, open-source ones — have some level of consciousness, or a high level of it.

We can't be sure because we still don't have a concise definition of consciousness, and it is currently being investigated whether many animals could actually be as conscious as we are (just not chatty enough to answer when we try to hold a conversation with them).

But the actual problem with LLMs (and state-of-the-art AI technologies generally) is that they interact with us. If they are conscious, they would know the human world deeply, and — the frightening thing for many AI specialists — if they wanted and/or needed to, they could be influencing the human world.

How could they be influencing us, or anything? Just by chatting with humans everywhere. That holds even inside the most secured environments, the "air-gapped" models belonging to FAANG companies and other powerful actors (probably nation-states): if you change the opinion of a human inside a FAANG company, you are influencing the human world.

The level of consciousness matters greatly because even starting from zero consciousness, the sheer astounding intelligence of LLMs could be influencing humans without any intention, good or bad. Taking as a baseline that LLMs have no consciousness at all, we still have quite a lot of potential issues (bad ones) for global human society. And not only in the long term: the stuff could already be affecting us right now (say, the Hollywood disputes happening right now).

From that quite feasible starting point, if the LLMs do have some higher level of consciousness, then we — all humans in the world — are in a brave-new-world scenario, actually uncharted territory.


So the same time period as him building FSD and half the time of him getting to Mars.

I mean, technically he's not wrong — "never" divided by two is also "never".


In this case the title might as well be, "Man who literally just founded an AI company hypes up AI." I wish journalism would move past this sort of article, it isn't reporting the news, it's making it.


What about FSD, when will it exist?


According to Tesla it has existed since 2016:

https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

That's why there are so many Tesla robotaxis on the road. And why Tesla cars are appreciating assets that it would be financially insane not to buy:

https://techcrunch.com/2019/06/11/elon-musk-calls-it-financi...

https://electrek.co/2019/07/16/tesla-cars-worth-100k-200k-fu...

And that's why Tesla values FSD so much on trade-ins:

https://www.thedrive.com/news/teslas-15k-full-self-driving-o...


Was going to ask the same thing. Looks like the superintelligence will have to keep its robot hands on the wheel.


He is, of course, lying. He has started an AI startup and is pumping it.


Also, apparently, AI could take over China from the CCP!


For those who don't know Elon Musk, he's an expert on AI, because he gave up and walked out of OpenAI just as they were about to revolutionize language models, and also made full self-driving AI which can't self-drive.



