OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board (openai.com)
152 points by flawn 3 months ago | 83 comments



This seems very much like the beginning of the situation predicted by Aschenbrenner in [1], where the AI labs eventually become fully part of the national security apparatus. It will be fascinating to see whether the other major AI labs also add ex-military folks to their boards or whether this is unique to OpenAI.

Or conceivably his experience is genuinely relevant in its own right, completely unrelated to the governmental apparatus and not a sign of the times.

[1] situational-awareness.ai


LLMs are exactly what that NSA datacenter in Utah was built for.

It's gonna be wild to see what secret needles come out of that haystack.


At least 12 exabytes of mostly encrypted data, waiting for the day that the NSA can decrypt it and unleash all of these tools on it.

Whenever that day comes (or came), it will represent a massive shift in global power. It is on par with the Manhattan Project in terms of scope and consequences.


I've thought the same.[0]

Soon, if not already, they can just ask questions about people.

"Has this person ever done anything illegal?"

Then the tools comb through a lifetime of communications intercepts looking for that answer.

It's like the ultimate dirt finder, but without the outsized manual human effort that used to ensure it was largely only abused against people of prominence.

[0] https://news.ycombinator.com/item?id=35827243
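To make the idea concrete, here's a toy sketch of the kind of pipeline being described: embed a corpus of messages, then retrieve the ones most relevant to an analyst's question (and, at scale, hand the hits to an LLM to summarize). Everything here is a stand-in — the corpus, the question, and the use of sentence-transformers are all just illustrative, not anyone's actual system:

  # Semantic search over a message corpus -- minimal sketch.
  # sentence-transformers is a real library; "all-MiniLM-L6-v2" is a real model.
  import numpy as np
  from sentence_transformers import SentenceTransformer

  corpus = [
      "Meet me at the usual place at 9pm.",            # stand-in messages
      "Transferred the funds to the second account.",
      "Happy birthday! See you at dinner on Sunday.",
  ]
  question = "Has this person ever done anything illegal?"

  model = SentenceTransformer("all-MiniLM-L6-v2")
  doc_emb = model.encode(corpus, normalize_embeddings=True)
  q_emb = model.encode([question], normalize_embeddings=True)

  # Embeddings are unit-normalized, so cosine similarity is a dot product.
  scores = doc_emb @ q_emb.T
  for i in np.argsort(-scores[:, 0]):
      print(f"{scores[i, 0]:.3f}  {corpus[i]}")

Scale that from three messages to exabytes of intercepts and you get the dirt finder above.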



You don't really need a person inside the LLM provider to just use the LLM tech. This is more than that.


They’re already filled with foreign spies, so we may as well have our own in there too…


The NSA was using AI long before LLMs were here.


It's less about the NSA having AI capabilities and more the inverse: the NSA having access to people's ChatGPT queries. Especially if we fast-forward a few years, I suspect people are going to be "confiding" a ton in LLMs, so the NSA is going to have a lot of useful data to harvest. (This is in general regardless of them hiring an ex-spook, BTW; I imagine it's going to be just like what they do with email, phone calls and general web traffic, namely slurping up all the data permanently in their giant datacenters and running all kinds of analysis on it.)


I think the use case here is LLMs trained on exabytes of bulk surveillance data. Imagine an LLM that has been fed every banking transaction, text message, or geolocation ping within a target country. An intelligence analyst can now get the answer to any question very, very quickly.


> I suspect people are going to be "confiding" a ton in LLMs

They won't even need to rely on people using ChatGPT for that if things like Microsoft's "Recall" are rolled out and enabled by default. People who aren't privacy conscious will not disable it or care.


Why do you assume the NSA has ChatGPT queries?


Why wouldn’t they, after the Snowden revelations?


Because ChatGPT is a sizable domestic business, and most large data collectors are enrolled in the NSA's PRISM program whether they like it or not.


Probably, but so did a lot of people. Computer vision and classifier/discriminator models were pretty common in the 2000s and extremely feasible with consumer hardware in the 2010s.


https://en.wikipedia.org/wiki/Paul_Nakasone

He was the director of the NSA AND head of United States Cyber Command from 2018 until his retirement in Feb 2024 - so very recent!


The dual posting is important, but it does not mark Nakasone as uniquely talented: the post is always held by the sitting director of the NSA.


Maybe he's not the retirement type! There is some nuance in that he's retired from military service, with a four-month gap before taking a new non-military role.


There are many ways to read this. One of the most obvious ones is that ChatGPT is becoming more and more disappointing after the initial surprise, so the tech has to be flipped to the MIC.


Military Industrial Complex


It always was. Tech is MIC.


Silicon Valley/SF wouldn't be what it is today without all the defense tech investments in the 1950s and onwards, paving the way for more "consumer" tech.


Thank you. And if you've worked for a consumer tech giant with your eyes open, it's a MIC surveillance platform... the denial is that it won't happen to you...


Downvote all you want, denial.


OG comment is +5 so 4 of you in complete denial :D


Pro tip: Stop feeding the surveillance industrial complex. Buy your own GPU hardware as soon as you can afford it, and move to llama3 or a comparable model if your application can work with it. Also, do not ever install any desktop screen-capture app for AI analysis purposes.
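For the local-GPU route, here's a minimal inference sketch using llama-cpp-python (a real library); the model path is a placeholder for whatever GGUF build of Llama 3 you download, e.g. from Hugging Face:

  # Local inference: nothing leaves your machine.
  from llama_cpp import Llama

  llm = Llama(
      model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder path
      n_gpu_layers=-1,   # offload every layer to the GPU
      n_ctx=8192,        # context window
  )

  out = llm.create_chat_completion(
      messages=[{"role": "user", "content": "Summarize my notes: ..."}],
      max_tokens=256,
  )
  print(out["choices"][0]["message"]["content"])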


Shoutouts to Metal Gear Solid 2, not much else left to say here.


Its commentary on AI and surveillance was really ahead of its time.


All of the MGS games are!



Ex-Google CEO Eric Schmidt chaired the U.S. NSCAI (National Security Commission on Artificial Intelligence); its final report (2021) is worth reading: https://web.archive.org/web/20230211035940/https://www.nscai...

> ... our leaders confront the classic dilemma of statecraft identified by Henry Kissinger: “When your scope for action is greatest, the knowledge on which you can base this action is always at a minimum. When your knowledge is greatest, the scope for action has often disappeared.” ... AI systems will also be used in the pursuit of power. We fear AI tools will be weapons of first resort in future conflicts. AI will not stay in the domain of superpowers or the realm of science fiction. AI is dual-use, often open-source, and diffusing rapidly.


Eric Schmidt has a weird fetish for not just weapons but military personnel in general:

  "The Israeli tank commander who has fought in one of the Syrian wars is the best engineering executive in the world. The tank commanders are operationally the best, and are extremely detail oriented. This is based on twenty years of experience working with them and observing them." —Eric Schmidt.
  Start-up Nation by Singer et al. (p. 41).


I find that ex-military employees tend to be (but are not always) mindful of details above and beyond the average. I feel the same way about people who've worked at hospitals.


> ex-military employees

Schmidt is talking about combatants. Thankfully for you, you haven't crossed any psychopathic murderous fascists.


All tank commanders are a perfect fit for the next Napoleon.


> OpenAI appoints Retired U.S. Army General to board.

Very typical move of a company planning to enter the military supplier space.


Of course SIGINT wants to let these models scan everything, but I guess the really interesting part is how much data the NSA has to make the models even better.


Not sure how valuable that data would be for training models. I'm guessing it's millions of routine phone conversations and text messages, maybe a ton of satellite imagery. I wouldn't consider that particularly useful for going from GPT-5 to GPT-6.


I was thinking of industrial espionage and secrets/technology not publicly available, but having thought about it more clearly, I'd be surprised if they made a database of it and uploaded it to the cloud.



I've always wondered if the US Government's AI research is as advanced as their general technological lead (usually 10-15 years ahead of consumer tech). If so, it would mean OpenAI has more to gain with such a partnership than the other way around. And maybe this sort of oversight is a kind of de facto requirement when approaching AGI.


The US government doesn't have a general technological lead. There are only a few specific areas where it has a lead by virtue of funding/doing most of the research, like cryptography, sensors, explosives and such. Most of these areas are relatively narrow too. The government is usually buying special versions of readily available commercial products. Any F500 company could buy things just as powerful.


Do you think there are more exascale supercomputers on Earth than are publicly acknowledged?


In what fields is the government 10-15 years ahead of consumer tech? That seems like a totally absurd claim.

Technology moves a very long way in 15 years. The only market where this might be true is specialized military equipment, where there is no consumer market other than aftermarket military goods.


>In what fields is the government 10-15 years ahead of consumer tech?

Antenna design.

My employer is building specialized antennas for the military that, once the research, design, and infrastructure have been paid for and put into place by the Department of Defense, will filter down into enterprise applications and then finally consumer goods.

Dynamic metasurfaces, dynamic polarizations, and dynamic multiband rx/tx for antennas are all coming, eventually, to smartphones. Just got to get them a little smaller and a lot cheaper, but that's coming.

All paid for by AFRL, DARPA, and NRL grants.

I don't work in the field. I would never tempt insanity by working in RF. I just reap the ESPP benefits from my employer raking in the cash by licensing out its tech to various OEMs.


Aviation is a big one. The SR-71 was in the sky in 1964. The gap is probably even closer to 20 years for "consumer" aviation tech.


The electronically steered phased array antennas used by SpaceX for Starlink dishes (a consumer product only recently) were already used by the U.S. military 20 years ago, although back then such a thing cost seven figures.


FYI, most of that lead has been lost, and I now put that old '00s-era 15-20 year marker closer to 5 years, which may help others understand the arms race we are in (the people vs. the supranational oligarchs).


Ex-NSA…how are we supposed to feel about this?


We should be horrified and thank the many people who have blown the whistle about Sam Altman. He is dangerously greedy and has no principles.


Do you not see the national security aspects of 2024 AI? It's very similar to the nuclear race where, right or wrong, there is a non-zero possibility of a winner-take-all supremacy scenario in which the first to reach superintelligence rules the world.


We don't let private companies own nuclear weapons, yet somehow these razzle dazzle claims of OpenAI's nuclear capabilities are never followed by "so nationalize OpenAI".


No credible AI expert thinks LLMs will lead to AGI. They are hilariously bad at most things and heavily dependent on questionable data sources like reddit.


AI doesn't need to be competent to cause harm, just like humans don't. Just ask the law enforcement agencies arresting people because their face recognition AI told them to.

Not that I trust governments trying to oversee them either...


There may be a nonzero chance of developing a superintelligence (or even AGI in any meaningful sense), but it's so close to zero that it doesn't matter.

Particularly because if a superintelligence is actually created, it won't matter who did it. It won't be the servant of the creators.


You are extremely confident about something that all 3 top AI labs and the top 3 most-cited experts on the subject disagree with.

Should I repeat the common quote about how the problem with this world is....


Yes, I am. I'm not certain, of course, which is why I acknowledge a nonzero chance of it.

There is not actually consensus about this amongst the experts in the field.


As I said, there actually is. All top 3 scientists and all 3 top labs that have built the tech you are referring to. It's unanimous.

Sources: the h-index rankings of AI researchers and the LLM rankings.

Everyone can look them up and what their stance is.


>it's so close to zero that it doesn't matter.

How can you so confidently assert that? AI just needs to beat humans at AI research to start compounding its growth in performance.

>It won't be the servant of the creators

Humans are very paranoid and competitive because we've had to be for tens of thousands of generations to survive. Super intelligent or not, why would AI start ignoring instructions or develop a survival instinct? This line of thinking has been popular in science fiction, but what is it based on? Just because something is smarter than us, it will kill us, as Eliezer Yudkowsky asserted, "because it wants our atoms"? It seems like we are viewing AI through a human lens.

Superintelligence under human control is what scares the shit out of me. I see superintelligence as the only thing capable of saving humans from themselves.


> How can you so confidently assert that?

Because I've seen no strong reason to think otherwise. However, everyone -- myself included -- is just guessing here. I could certainly be wrong. As could the ones of the opposite opinion.

> AI just needs to beat humans at AI research to start compounding its growth in performance.

That's assuming that it's possible to create an AGI at all given our current direction. I seriously doubt that it is. Time will tell, though.


love this take.


If you swap out superintelligence for the market, we're long past the point of no return. I don't see how AI could be any worse.


Ex-director of the NSA, but to be fair it has been a whole four months since then. What are the chances he still has connections with the agency?


> What are the chances he still has connections with the agency?

Inconceivable


Part of Snowden's explanation for why he was doing NSA work as a BAH contractor was that this is perfectly normal: employees "leave" and are then employed as contractors to get around limits on number and compensation for federal employees. Keeping their security clearances, of course.


Definitely zero.


Yeah, they brain-wipe you after you leave the NSA /s


Or rather, the agency still has connections to him.


It looks to me like a sign that OpenAI is getting more comfortable with revealing their true colors.


Google and Facebook already have a significant number of "disinformation staff" from the US security state. As another big tech firm that collects a huge amount of info and gets to decide what to present to the general public, OpenAI has to be added to that list as well.


[eye twitching]

... Good. Everything's so good.


Will there soon be regulator-industry revolving doors for AI?


I noticed that GPT-4o is much more competent at analyzing photos of various aircraft and answering questions about aerial photos than GPT-4.

I thought my finding was coincidental: sample size of 1 and pure bias. Maybe it isn't; maybe it really did get much better.
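If anyone wants to test this beyond a sample size of 1, here's a minimal sketch with the official openai Python client (v1.x); the image URL and question are placeholders:

  # Ask two models the same question about the same photo and compare.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def describe(model: str, image_url: str) -> str:
      resp = client.chat.completions.create(
          model=model,
          messages=[{
              "role": "user",
              "content": [
                  {"type": "text", "text": "What aircraft is shown here?"},
                  {"type": "image_url", "image_url": {"url": image_url}},
              ],
          }],
      )
      return resp.choices[0].message.content

  url = "https://example.com/aircraft.jpg"  # placeholder photo
  for model in ("gpt-4-turbo", "gpt-4o"):
      print(model, "->", describe(model, url))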


That is to be expected. GPT-4o outperforms GPT-4 on all vision tasks that I have thrown at it so far. It lacks GPT-4's reasoning capabilities though.


As Combatant Commander of USCYBERCOM and previously the Commander of the Cyber National Mission Force, he would have had a lot of experience with DISA policy. OpenAI and Microsoft are currently working to get the Azure OpenAI service a full ATO (Authority to Operate) as part of the FedRAMP process, and I'm sure General Nakasone would bring a lot of value there alone. That's before you even get into concerns regarding national security and threats from foreign actors.


Well lads, it was fun while it lasted


Funny title, but I assume it's meant to improve appeal to the government.


Sure, it could be that. Or it could be the uniquely relevant set of skills and insights a former NSA director could bring to a table of mostly tech academics trying to build tech that will definitely touch questions of state security.


What is funny about it? It's about as succinct a descriptor of the situation as possible.


Former director of the National Security Agency (NSA)…good to know the U.S. government keeps tabs on my ChatGPT prompts :)


I went down the rabbit hole of pretending to be a Taiwanese citizen preparing for an invasion and asking for support in defending the country by means of materials science and technology.

The safety system in place prevented it from answering a lot of questions around defense.

I thought it was an interesting example of a safety system causing societal collapse in a sci-fi scenario that may soon be reality.


You're not self-censoring in your prompts? Lagging behind!


Well hopefully this means they will begin taking security seriously.




