This seems very much the beginning of the situation predicted by Aschenbrenner in [1], where the AI labs will eventually become fully part of the national security apparatus. It will be fascinating to see whether the other major AI labs also add ex-military folks to their boards or whether this is unique to OpenAI.
Or conceivably his experience is genuinely relevant on its own merits, entirely unrelated to US national security or the governmental apparatus going forward, and not a sign of the times.
At least 12 exabytes of mostly encrypted data, waiting for the day that the NSA can decrypt it and unleash all of these tools on it.
Whenever that day arrives (or arrived), it will represent a massive shift in global power, on par with the Manhattan Project in scope and consequences.
Soon, if not already, they will be able to just ask questions about people.
"Has this person ever done anything illegal?"
Then the tools comb through a lifetime of communications intercepts looking for that answer.
It's like the ultimate dirt finder, but without the outsized manual human effort that used to ensure it was largely only abused against people of prominence.
It's less about the NSA having AI capabilities and more the inverse: the NSA having access to people's ChatGPT queries. Especially if we fast-forward a few years, I suspect people are going to be "confiding" a ton in LLMs, so the NSA is going to have a lot of useful data to harvest. (This holds in general, regardless of them hiring an ex-spook, BTW; I imagine it's going to be just like what they do with email, phone calls, and general web traffic, namely slurping up all the data permanently into their giant datacenters and running all kinds of analysis on it.)
I think the use case here is LLMs trained on billions of terabytes of bulk surveillance data. Imagine an LLM that has been fed every banking transaction, text message, or geolocation ping within a target country. An intelligence analyst can now get the answer to any question very, very quickly.
> I suspect people are going to be "confiding" a ton in LLMs
They won't even need to rely on people using ChatGPT for that if things like Microsoft's "Recall" are rolled out and enabled by default. People who aren't privacy conscious will not disable it or care.
Probably, but so did a lot of people. Computer vision and classifier/discriminator models were pretty common in the 2000s and extremely feasible with consumer hardware in the 2010s.
Maybe he's not the retirement type! There is some nuance in that he is retired from military service, with a four-month gap before taking a new non-military role.
There are many ways to read this. One of the most obvious ones is that ChatGPT is becoming more and more disappointing after the initial surprise, so the tech has to be flipped to the MIC.
Thank you. And if you've worked for a consumer tech giant with your eyes open, you know it's a MIC surveillance platform... the denial is that it won't happen to you...
Pro tip: stop feeding the surveillance-industrial complex. Buy your own GPU hardware as soon as you can afford it, and move to Llama 3 or a comparable model if your application can work with it; see the sketch below. Also, do not ever install any desktop screen-capture app for AI analysis purposes.
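For what it's worth, the local setup is only a few lines these days. A minimal sketch, assuming the Ollama runtime with a pulled llama3 model and its Python client (my choices for illustration, not the only option):

```python
# Minimal local-only inference sketch. Assumes Ollama is installed and
# running, the model has been pulled (ollama pull llama3), and the
# Python client is available (pip install ollama).
import ollama

# The prompt never leaves the machine: the client talks to the local
# Ollama daemon over localhost, and the weights sit on your own disk/GPU.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize this contract clause."}],
)
print(response["message"]["content"])
```

Nothing in this flow touches a third-party API, which is the whole point of the suggestion above.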
> ... our leaders confront the classic dilemma of statecraft identified by Henry Kissinger: “When your scope for action is greatest, the knowledge on which you can base this action is always at a minimum. When your knowledge is greatest, the scope for action has often disappeared.” ... AI systems will also be used in the pursuit of power. We fear AI tools will be weapons of first resort in future conflicts. AI will not stay in the domain of superpowers or the realm of science fiction. AI is dual-use, often open-source, and diffusing rapidly.
Eric Schmidt has a weird fetish for not just weapons but military personnel in general:
"The Israeli tank commander who has fought in one of the Syrian wars is the best engineering executive in the world. The tank commanders are operationally the best, and are extremely detail oriented. This is based on twenty years of experience working with them and observing them." —Eric Schmidt.
Start-up Nation by Senor and Singer (p. 41).
I find that ex-military employees tend to be (but are not always) mindful of details above and beyond the average. I feel the same way about people who've worked at hospitals.
Of course SIGINT wants to let these models scan everything, but I guess the really interesting part is how much data the NSA has to make the models even better.
Not sure how valuable that data would be for training models. I'm guessing it's millions of routine phone conversations and text messages, plus maybe a ton of satellite imagery. I wouldn't consider that particularly useful for going from GPT-5 to GPT-6.
I was thinking of industrial espionage and secrets/technology not publicly available, but having thought about it more clearly, I'd be surprised if they made a database of it and uploaded it to the cloud.
I've always wondered whether the US government's AI research is as advanced as its general technological lead (usually 10-15 years ahead of consumer tech). If so, it would mean OpenAI has more to gain from such a partnership than the other way around. And maybe this sort of oversight is a kind of de facto requirement when approaching AGI.
The US government doesn't have a general technological lead. There are only a few specific areas where it has a lead by virtue of funding or doing most of the research, like cryptography, sensors, explosives, and such. Most of these areas are relatively narrow too. The government is usually buying special versions of readily available commercial products. Any F500 company could buy things just as powerful.
In what fields is the government 10-15 years ahead of consumer tech? That seems like a totally absurd claim.
Technology moves a very long way in 15 years. The only markets where this might be true are specialized military equipment, where there is no consumer market other than aftermarket military goods.
>In what fields is the government 10-15 years ahead of consumer tech?
Antenna design.
My employer is building specialized antennas for the military that, once the research, design, and infrastructure have been paid for and put into place by the Department of Defense, will filter down into enterprise applications and then finally consumer goods.
Dynamic metasurfaces, dynamic polarizations, and dynamic multiband rx/tx for antennas are all coming, eventually, to smartphones. Just got to get them a little smaller and a lot cheaper, but that's coming.
All paid for by AFRL, DARPA, and NRL grants.
I don't work in the field. I would never tempt insanity by working in RF. I just reap the ESPP benefits from my employer raking in the cash by licensing out its tech to various OEMs.
The electronically steered phased-array antennas used by SpaceX for Starlink dishes (a consumer product only recently) were already used by the U.S. military 20 years ago, although back then such a thing cost seven figures.
FYI, most of the lead has been lost, and I now put that old '00s-era 15-20 year marker closer to 5 years, which may help others understand the arms race we are in (the people vs. the supranational oligarchs).
Do you not see the national security aspects of 2024 AI? It's very similar to the nuclear race where, right or wrong, there is a non-zero possibility of a winner take all supremacy scenario where the first to reach superintelligence rules the world.
We don't let private companies own nuclear weapons, yet somehow these razzle dazzle claims of OpenAI's nuclear capabilities are never followed by "so nationalize OpenAI".
No credible AI expert thinks LLMs will lead to AGI. They are hilariously bad at most things and heavily dependent on questionable data sources like reddit.
AI doesn't need to be competent to cause harm, just like humans. Just ask the law enforcement agencies arresting people because their face recognition AI told them to.
Not that I trust governments trying to oversee them either...
There may be a nonzero chance of developing a superintelligence (or even AGI in any meaningful sense), but it's so close to zero that it doesn't matter.
Particularly because if a superintelligence is actually created, it won't matter who did it. It won't be the servant of the creators.
How can you so confidently assert that? AI just needs to beat humans at AI research to start compounding its growth in performance.
>It won't be the servant of the creators
Humans are very paranoid and competitive because we've had to be for tens of thousands of generations to survive. Superintelligent or not, why would AI start ignoring instructions or develop a survival instinct? This line of thinking has been popular in science fiction, but what is it based on? Just because something is smarter than us, it will kill us, as Eliezer Yudkowsky asserted, "because it wants our atoms"? It seems like we are viewing AI through a human lens.
Superintelligence under human control is what scares the shit out of me. I see superintelligence as the only thing capable of saving humans from themselves.
Because I've seen no strong reason to think otherwise. However, everyone -- myself included -- is just guessing here. I could certainly be wrong. As could those of the opposite opinion.
> AI just needs to beat humans at AI research to start compounding its growth in performance.
That's assuming that it's possible to create an AGI at all given our current direction. I seriously doubt that it is. Time will tell, though.
Part of Snowden's explanation for why he was doing NSA work as a BAH contractor was that this is perfectly normal: employees "leave" and are then employed as contractors to get around limits on the number and compensation of federal employees, keeping their security clearances, of course.
Google and Facebook already have a significant number of "disinformation staff" from the US security state. As another big tech firm that collects a huge amount of info and gets to decide what to present to the general public, OpenAI has to be added to that list as well.
As Combatant Commander of USCYBERCOM and previously the Commander of Cyber National Mission Forces, he would have had a lot of experience with DISA policy. OpenAI and Microsoft are currently working to get the Azure OpenAI service a full ATO as part of the FedRAMP process and I'm sure General Nakasone would bring a lot of value there alone. That's before you even get into concerns regarding national security and threats from foreign actors.
Sure, it could be that. Or it could be the uniquely relevant set of skills and insights a former NSA director could bring to a table of mostly tech academics trying to build tech that will definitely touch questions of state security.
I went down the rabbit hole of pretending to be a Taiwanese citizen preparing for an invasion and asking for support to defend the country by means of materials science and technology.
The safety system in place prevented it from answering a lot of questions around defense.
I thought it was an interesting example of a safety system with the potential to cause societal collapse, a sci-fi scenario soon to become reality.
[1] situational-awareness.ai