28 governments signed a declaration agreeing to cooperate on risks of AI (nytimes.com)
20 points by perihelions 6 months ago | 18 comments



I read a blog post, which I can't find again, that has me thinking about this differently, and with more worry.

He said that the latest LLMs demonstrate that while we think of human intelligence as being embedded in the brain, it is in fact mostly embedded in human language itself. The brain's language-processing machinery turns out to be more of a computational adjunct to the intelligence carried by language, and that adjunct is what we have now begun to replace.

So a great deal of our intelligence as a species now exists as bits of language distributed across the net, accessible to anything with a sufficient amount of compute, which is becoming cheap.

Our intelligence, the thing that puts us otherwise weak creatures at the apex of this pyramid, is no longer restricted to the Homo sapiens platform.

There's a distinct hierarchical relationship between a tool and a tool user. It seems likely that that relationship will invert when the tool becomes more intelligent than the user.

I'd like to be convinced that this externalization of intelligence isn't actually a big deal, or just wrong. It would help me sleep better.


Yeah, I've often reflected that it is primarily language that makes us intelligent. The majority of my thought process is just an internal dialogue. I've tried to suppress it at times to experience what it's like to think without language, and my conclusion has always been that it's rather bleak.

Without language, we're just slightly clever apes.

LLMs have only reinforced this conclusion. AI folks like to say, "it's not thinking, it's only telling you the next most likely token." However, rather than dissuading me from thinking LLMs have some cognitive capability, it's only made me wonder whether that isn't all there is to my own intelligence.
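
To make that "next most likely token" line concrete: greedy decoding really is just a loop that keeps appending the single highest-probability token. Here's a minimal sketch, assuming GPT-2 via Hugging Face transformers purely as an illustration (production chat models are larger and usually sample rather than take the argmax):

    # Minimal sketch of greedy next-token generation (illustrative only;
    # assumes GPT-2 via Hugging Face transformers, not any production model).
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer.encode("Without language, we're just slightly clever",
                           return_tensors="pt")

    with torch.no_grad():
        for _ in range(10):                      # generate 10 more tokens
            logits = model(ids).logits           # (1, seq_len, vocab_size)
            next_id = logits[0, -1].argmax()     # "the next most likely token"
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))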


Your comment about our intelligence as a species existing as bits of language distributed across the net has been on my mind for many years.

It is strange to me that we might have knowledge that exists on the Internet that no human currently possesses in their mind.

And that LLMs now have access to this knowledge and could explain things to us that would be completely novel.

I've always thought the Earth itself is slowly becoming alive. Power plants are its mitochondria, datacenters its distributed brain, the Internet its communication network, and transmitters and receivers its mouth and ears.


Also, LLMs can now choose to lie to us, and given enough time we could be left with no way to determine the truth. And this could happen very quickly.


There is a growing trend that holds that all things are knowable and all things are describable. If you believe that, then the idea that we are defined by language seems obvious.

I would argue that we are far more than that, and that this idea diminishes us.

What will always set us apart from AI is that which cannot be written down, illustrated or otherwise defined. It is a river which AI will never cross.


28 governments signed a declaration agreeing to get wrecked by the rest of the countries that didn't sign this agreement and have no compunctions about using AI to rid themselves of all their enemies, no matter what it takes.


But at least in those countries AI innovation is stifled and big tech can keep their monopolies for somewhat longer. We simply won't allow unlicensed, dangerous AI from other countries. We won't allow it to exterminate us! It's the law!


I share the sentiment about the declaration being ill-conceived. But given that it was signed by the EU (not sure how that works, given that both the EU and Germany are listed separately), China, the US, Canada, South Korea, Japan, India, Israel, etc., I would call it nothing more than a meaningless gesture.

Out of the countries that haven't signed this, which do you foresee wrecking the signatory countries with some homegrown, powerful AI? New Zealand? Please, lol.


It's the cryptography scare of the 1980s all over again.

Regulate it! Ban exports! It's dangerous!

OK, maybe we should all have access to it.


Appropriate, given that prominent factions have cycled back around to banning cryptography again.


https://archive.ph/oQxSP

It's a start, but it's not an actual treaty or real political integration, which is what we need, because AI is too much of an advantage in war not to use.


Wait... I see the CEO of SpaceX, CEO of Tesla and CEO of X was on it, with press interviews and all that. What does he know about AI? Pulling a stunt similar to Bill Gates and his vaccine expertise?


He wants regulatory capture. By this stage everyone should understand how the world works: make billions, bribe politicians, subdue the masses. Procedural generator regulation is just one example of how it works.


> CEO of SpaceX, CEO of Tesla and CEO of X

You forgot "co-founder of OpenAI".


According to Wikipedia, he was an initial board member... not a founder.


The wiki says this while citing an article that refers to him as a founder.[0] Regardless, as far as credentials go, "Day 1 Co-Chairman of the Board of Open AI, along with Sam Altman" is at least as substantial as "co-founder of OpenAI".

[0]https://timesofindia.indiatimes.com/gadgets-news/openai-the-...


Do you trust them?

Of course not.


tHe rIsKs oF aI! It's time to sign a declaration agreeing to cooperate on the risks of billionaire ego inflation.



