Utterly predictable. AI will be weaponized, and anybody working on it is going to have to live with that knowledge. Consider yourselves part of the MIC from now on; no more BS about doing this for the betterment of humanity.
And to add to that: once any party figures out how to do this, it is only a matter of time before the rest do too; there is no such thing as a secret. The atomic bomb leaked and so will the recipe for AGI. So not only are you working for the MIC of your own country, you are also enabling your future enemies.
At least Ilya's crew was working on alignment (https://openai.com/blog/introducing-superalignment). If only the rank-and-file had been more vocally supportive of that, instead of enthusiastically boarding the Altman train. Look where that train is headed...
Alignment with whose values? Altman's? Ilya's? Humanity's? The USA's? Some unspecified ideal? I have a really hard time passing the responsibility for such massive-impact decisions to a bunch of talented technicians who have already demonstrated a poor command of ethics. The more likely outcome is that it will end up being 'alignment with whoever has the money', and that's a recipe for some bad stuff in our future.
My point is that Americans can and will overthrow any regime that makes their lives hard(er). It'll take a LOT to encourage them, but they will do so, and there are even ways to do so peacefully and without violence: elections, then state laws, then an Article V convention.
That's like asking "programs that execute within a predictable scope for what?"
For whatever they're being written for. Alignment's goal is to have models do what they're being trained to do and not other random things. It won't be uniform; for example, determining what "inappropriate" means will vary between countries.
More like self-driving cars that stay within the lines instead of treating everything as an offroad opportunity. If you wanna fire rifles from them or mod them to run people over, that's on you, right?
The US will be at the forefront for just as long as it will take to smuggle a couple of USB sticks out of OpenAI, and I figure the chances of Chinese plants at OpenAI are roughly 100%.
Microsoft has the models running in its Azure datacenters in Europe already. I'm guessing there are other organizations that have them where it hasn't been made public. At this point I'd be surprised if they haven't been leaked to other governments.
But as other comments say, China has smart people too. They've had facial recognition and other invasive mass data collection systems running at scale for years. They have the advantage of a lot of data they can use for training.
> It's maybe also that US population tends to underestimate how other people are smart (US-centrism really does exist).
They do.
> Chinese people are very smart, and there is technically more of them, so I am not surprised that they are releasing amazing open models.
They also have an educational system that wastes less talent and have fewer - if any - roadblocks to the will of the party bosses. In a war a dictatorship can move in ways that a democracy is ill equipped to follow simply because there is no dissent. That's why it took half the world to take on three relatively little countries in WWII.
> China has no problems to push their own models for free, and this is a real strategic advantage.
It is, but I for one wouldn't use them.
> These models are aligned with Chinese values, but well, American models are aligned with American beliefs and values as well, right ?
Yes, but I'm far less concerned with the present-day models than I am with the advent of AGI, which is what OpenAI and various international competitors are aiming for. If one of them shows it can be done, then before you can blink this crap will be all over the world. After that point all bets are off.
The biggest problem won’t be AGI. It will be the thousands of shitty AI and ML models which predict things with 99.9% accuracy, meaning people (read: judges, etc.) assume it's 100% accurate regardless of how often it gets used; at that scale, the remaining 0.1% is a lot of people.
Look at the Post Office (Horizon) scandal in the UK. Now imagine that in all systems, because AI is inherently statistical in nature.
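A minimal back-of-the-envelope sketch (Python, with purely illustrative numbers) of why "99.9% accurate" stops meaning "safe to trust blindly" once a system is run at scale:

```python
# Rough arithmetic for a hypothetical "99.9% accurate" fraud detector.
# All figures below are illustrative assumptions, not real data.

cases_per_year = 1_000_000   # assumed number of accounts screened per year
error_rate = 0.001           # the flip side of "99.9% accuracy"
true_fraud_rate = 0.0005     # assume only 0.05% of cases are actually fraudulent

wrong_calls = cases_per_year * error_rate
actual_fraudsters = cases_per_year * true_fraud_rate

print(f"Wrong calls per year:   {wrong_calls:.0f}")       # ~1,000
print(f"Actual fraud cases:     {actual_fraudsters:.0f}")  # ~500

# Under these assumptions the system can flag more innocent people than
# there are real fraudsters, yet every individual flag still looks like
# "99.9% reliable" evidence to a judge who sees one case at a time.
```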
You don't need AGI to be able to implement abusive policies. But it definitely helps if you want to be able to do it at a scale humanity is not in a position to cope with. Stable dictatorships are a very likely outcome of such technology. Also in places where we currently do not have dictatorships.
> They also have an educational system that wastes less talent and have fewer - if any - roadblocks to the will of the party bosses. In a war a dictatorship can move in ways that a democracy is ill equipped to follow simply because there is no dissent. That's why it took half the world to take on three relatively little countries in WWII
That last part is really scary. I wanted to add that their TikTok algorithmically encourages young people to achieve and push themselves, whereas our TikTok algorithmically encourages young people to do stupid or whorish stuff for likes.
As an immigrant to the USA I'd be really happy to work for the MIC without leaving the tech industry (I'm too selfish to accept a lower salary + move to the East Coast + deal with, presumably, a huge bureaucracy).
Atomic bomb still hasn't leaked beyond ~10% of the countries and many had to do it ~from scratch, so it's not a very good example. The choice with AI is either make one, or wait for it to leak from those who do anyway.
> Atomic bomb still hasn't leaked beyond ~10% of the countries and many had to do it ~from scratch, so it's not a very good example.
On the contrary, it's a fantastic example, because we've been living under the shadow of them ever since they were invented. And long term the chances of them being used again are 100%.
The GP used it as an example of a technology that would also leak to one's enemies. The atomic bomb is a poor example: it barely leaked, and it was in any case developed by everybody's enemies. It's better to be the first ones in this case.
We haven't even properly absorbed the impact of the last two tech revolutions, and the interval between them is shortening. I wasn't, and still am not, much of a believer in the acceleration scheme, but the last two decades and a somewhat serious review of the last 200 years have me wondering if there isn't more to it than it seems.
It makes the most sense to me if you think of change as a result of people doing things, sharing information, and the sheer size of the population. About 7% of the people who ever lived are alive today.
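A quick sanity check on that ~7% figure, using the commonly cited demographic estimate of roughly 117 billion humans ever born (both totals below are rounded assumptions):

```python
# Back-of-the-envelope check of the "~7% of everyone who ever lived" claim.
ever_born = 117e9    # rough estimate of humans ever born (e.g. PRB's figure)
alive_today = 8e9    # approximate current world population

share = alive_today / ever_born
print(f"{share:.1%}")  # ~6.8%, i.e. roughly 7%
```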