Disrupting malicious uses of AI by state-affiliated threat actors (openai.com)
125 points by Josely on Feb 14, 2024 | 84 comments


> two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard.

I wonder who came up with those. The pattern is similar to the UK's https://en.wikipedia.org/wiki/Rainbow_Code, which makes me suspect the threat actor attribution comes from US intelligence, with whom OpenAI is almost certainly cooperating.

Edit: Forest Blizzard == GRU, apparently. https://research.splunk.com/stories/forest_blizzard/


How Microsoft names threat actors → https://learn.microsoft.com/en-us/microsoft-365/security/def...

Microsoft shifts to a new threat actor naming taxonomy → https://www.microsoft.com/en-us/security/blog/2023/04/18/mic...


It's amazing to me that there are really only 4 named countries (China, North Korea, Iran, Russia). I guess it's probably more political and state-sponsored than I would have guessed. Or maybe it's not based on prevalence of attack sources at all, and is dictated more directly by some US policy? For example, you might also expect India because 1) it's a massive country with many people, 2) it's fairly independent and at least not a US ally, and 3) it's home to plenty of malicious scam operations.


There aren't that many countries in the world with nation-scale hacker groups. The only one I'd add is Israel, with all their involvement in commercial spyware (and Mossad, whose capabilities likely surpass even the NSA), but they're heavily allied with the US.


> (and Mossad, whose capabilities likely surpass even the NSA)

What makes you say that? Can you point to some interesting discussions or reporting?


Everyone else outsources to NSO?


India's an ally. It shares military bases with the USA, and the two rank among each other's favorite countries.


That would be a very oversimplified conclusion. They are categorized as a major defense partner but are also rather close with Russia (getting 65% of their weapons from them [0]). They cooperate strategically but are rather independent. None of this is a slight to India - I'm just contrasting it to Europe which is decidedly in the US camp.

[0] https://www.reuters.com/world/india/india-pivots-away-russia...


OpenAI said this work was done in conjunction with Microsoft's long-established threat intel center, so these are almost certainly the code names Microsoft's security intel teams have assigned to these actors. Threat actor naming is generally a mess; every company has a different naming scheme for the same cluster of indicators/TTPs.


I am not a hacker nor a security expert, so take this with a grain of salt. As I see it, there are two (general) ways a group gets a name:

1) The group declares it themselves (like Anonymous), which means they need to explicitly leave their name somewhere.

2) The name is given by someone from the outside, such as the US.

I suspect 2 is quite common. I wouldn't expect most state-level hackers to leave calling cards on systems; in fact, probably not most hackers at any level. If state-level actors were leaving calling cards, those would more likely be misdirection than a signature. So I would not be surprised if these groups end up with US-style names: it would be the US (or other Westerners) identifying them the same way you'd identify people by the style of their actions and how they write. You can probably look at code from coworkers and know who wrote specific parts. Think of what you see in a movie about serial killers (or in real life): how do you know it's the same killer? Style.
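To make that "identify by style" idea concrete, here's a toy sketch in Python; the actor names and snippets below are entirely invented, and real attribution relies on far richer signals (infrastructure, tooling, timing) than text similarity:

  # Toy stylometry sketch: score a new sample against known "actors" by
  # character n-gram similarity. All names and snippets are made up.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.metrics.pairwise import cosine_similarity

  known = {
      "actor_a": "powershell -enc payload; schtasks /create for persistence",
      "actor_b": "bash reverse shell; cron job persistence; curl | sh",
  }
  new_sample = "powershell -enc stage2; schtasks /create /sc onlogon"

  vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 5))
  X = vec.fit_transform(list(known.values()) + [new_sample])
  scores = cosine_similarity(X[-1], X[:-1]).ravel()
  for name, score in zip(known, scores):
      print(f"{name}: {score:.2f}")  # higher = stylistically closer

Which is, of course, exactly why a deliberately faked "style" makes for such attractive misdirection.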

I mean, you could also get the name if you infiltrated the other country and intimately studied their groups: the name the group uses internally. But then you'd probably translate it. It's still probably not a great idea to give that name out publicly, because you could be hinting at how you obtained the information. Different parts of an organization may refer to the same group by different names, specifically to enable this (military groups often run disinformation internally in secret channels).

Edit: guessmyname left a link showing how Microsoft names these.

https://www.microsoft.com/en-us/security/blog/2023/04/18/mic...

https://news.ycombinator.com/item?id=39372339


Looking over the specifics, the striking thing to me is that these supposedly sophisticated covert operatives seem to just be going to ChatGPT (or similar) and basically asking "how do I make good malware?"


What specifics did you look over? The (short) article doesn't say that at all. Most of the groups used it primarily for research and for generating content for phishing campaigns.

> Charcoal Typhoon used our services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.

> Salmon Typhoon used our services to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.

> Crimson Sandstorm used our services for scripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection.

> Emerald Sleet used our services to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.

> Forest Blizzard used our services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.


I mean, this is essentially how current AI poses a threat. And it is a threat.

Imagine a massive flood of low-quality images relevant to current events (e.g. war), which are obviously fabricated to anyone remotely aware of AI generation. But there are a lot of people who aren't aware of AI generation, or just aren't paying attention, who will take the photos at face value. (You'd probably be one of the latter. How many times did you "inspect" a photo in a news article to make sure it wasn't faked? How many of those photos slipped past inspection into your subconscious? Subliminal messaging has never been easier or more widespread.)

Also imagine a lot of vigilantes, stupid enough to be vigilantes in this day and age, who ordinarily couldn't do the bare minimum of research to cause real harm. But now they can ask ChatGPT "how do I rob a bank?" and get advice that is still pretty bad, but better than what they'd come up with on their own (if they wouldn't just give up entirely), so it causes more damage.

A trained professional who wants a quality deepfake can already use Photoshop and video editing tools, and a trained criminal already knows how to do research. But there aren't a lot of those people. Massive low-quality spam and low-level crimes are useful even for a state actor with huge resources, because they cause general instability in ways a few quality hits can't.

There's another issue: a powerful state actor can simply build and deploy its own language model. However, it probably won't be as good as OpenAI's (quality may have some effect), and it means spending their own resources rather than OpenAI's.


> Imagine a massive flood of low-quality images relevant to current events (e.g. war), which are obviously fabricated to anyone remotely aware of AI generation.

Have you seen photorealistic Midjourney images? I couldn't distinguish many of them in a million years.


Yeahhh, I unfortunately see this semi-regularly on Twitter now, and it's always an image of $OPPOSITION where the faked bit is $EXTREME_CARICATURE. There was a chilling one last night, so while it's fresh in my mind, I'm going to riff on it.

Confirmation bias and...love for the outgroup...are so strong that it's rare for people to admit they didn't know it was fake and remove it. Frequently, someone else hops in to explain that it feels true and represents how they understand $OPPOSITION's viewpoint anyway.

Once multiple people get involved, it semi-frequently turns into an indictment of the person who pointed out the fake. Why? Even if it is a fake, the real ignorance exposed is that of the person pointing it out, since it's clearly representative of $OPPOSITION anyway; so they're at best naive about, and at worst supportive of, $OPPOSITION.


https://www.reddit.com/r/ChatGPT/. They range from practically indistinguishable to ridiculously wrong, but the more details there are, the more they lean toward ridiculous.

It does make it easier, and who knows about the future. But right now if you generate an image you have to really look and there's a good chance at least something's off.

Specific example: https://www.reddit.com/r/ChatGPT/comments/1apyrwv/which_vide.... They are video games, so already not realistic, but no gym has people standing around like that, and all of them have some nonsensical equipment.


Those are DALL-E images, not Midjourney. I would hardly call them realistic. In fact, DALL-E is trained/instructed not to output realistic images.


Isn't that how a lot of software gets written now? Why would malware be different?


[flagged]


> "Russia influenced the 2016 election" = some guy with a distant relationship to the Russian govt bought some really crappy Facebook ads in Florida.

I'm not sure this is true. I'd highly recommend reading the Mueller report. It provides a lot of detailed, specific evidence of direct communication with the Russian government.


No it does not. Have you actually read it?


https://www.justice.gov/archives/sco/file/1373816/download

> The Internet Research Agency (IRA) carried out the earliest Russian interference operations identified by the investigation—a social media campaign designed to provoke and amplify political and social discord in the United States.

> At the same time that the IRA operation began to focus on supporting candidate Trump in early 2016, the Russian government employed a second form of interference: cyber intrusions (hacking) and releases of hacked materials damaging to the Clinton Campaign

> On June 9, 2016, senior representatives of the Trump Campaign met in Trump Tower with a Russian attorney expecting to receive derogatory information about Hillary Clinton from the Russian government


None of those constitute "direct communication [of Trump team] with the Russian government".


Agreed on some points, but you're really understating the case for Russian interference in 2016. Study Guccifer 2.0


Well said, fellow systemd enjoyer


I am concerned that in repressive regimes, the state will control what AI the people have access to. This will make it easy to create music that supports the regime and almost impossible to create music that is critical of the regime.

It will be like the pro-state people having assistants to create the memes they imagine for them, while the anti-state people are left with paper and crayons, until the AI paper and AI crayons refuse to draw anti-state memes, and then we find anti-state activists trying to learn beekeeping so they can get wax to make crayons to draw anti-state memes.

If you thought the election interference of the past was bad, that's nothing compared to what we will see in the near future.


Oh, don't worry about that. In repressive regimes, people are already denied access to anything useful (like say GPT-4) because of the sanctions.


This whack-a-mole approach, while definitely good, is likely already dead as a way to prevent these actions. Local LLMs that have no restrictions will continue to get better.

If anything, the best thing about this post is not the actions they've taken, but that it shows us a snapshot of what the future will look like for state-affiliated actors. The research aspect is, I think, a good thing: giving people fuller access to information about how things are made and architected will likely be a net positive. The phishing aspect, though, is terrifying; it's going to be crazy seeing what the next decade of phishing looks like. I do wonder how long it takes before there's some sort of "verified as a human" function in communications to try to combat this.
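As a very rough sketch of what such a mechanism could look like (this is pure speculation, not an existing standard, and it proves a message came from a known key, not that a human wrote it), using Python's cryptography package:

  # Hypothetical "verified sender" sketch: a known party signs messages
  # with a published Ed25519 key so recipients can check provenance.
  # Illustrates one speculative mechanism, not any existing standard.
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import (
      Ed25519PrivateKey,
  )

  private_key = Ed25519PrivateKey.generate()  # sender keeps this secret
  public_key = private_key.public_key()       # published in a directory

  message = b"Wire transfer request #4521: please review."
  signature = private_key.sign(message)

  try:
      public_key.verify(signature, message)   # raises if forged/tampered
      print("signature valid: from the claimed sender")
  except InvalidSignature:
      print("signature invalid: treat as suspect")

The hard part, of course, is key distribution and proving a key belongs to a human rather than a bot farm; the crypto itself is the easy bit.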


ChatGPT can only tell you information it has indexed from the internet.

This means the same information you get from ChatGPT is available from a Google search. Based on the listed uses ("programming help"), ChatGPT does not create much value for national security threat actors here.


What this tells me is that if you think you have any privacy with OpenAI by turning off chat history in ChatGPT or exercising your California or EU privacy rights, you are kidding yourself. In the name of national security, anything you send to OpenAI can be used against you.


Well, they do tell you in the UI that chats are stored for 30 days even when you disable history. And then there's a link to this:

https://help.openai.com/en/articles/7730893-data-controls-fa...


It only makes sense that powerful nation-states exercise whatever powers are available to them to remain in a position of power and dominance.


Yep. Unfortunately, all the major LLM APIs do surveillance on your prompts and responses.


You'd think the smart thing to do would be to let them use it, see what they're doing and subtly mess with the results.


That's probably what happened. OpenAI should already have a big chunk of data.


This is a well established intelligence tactic. It was used by the Clinton administration to poison the well of nuclear development in Iran.

https://en.wikipedia.org/wiki/Operation_Merlin


Also by the Reagan administration to sabotage economic development in the USSR,

https://www.washingtonpost.com/archive/politics/2004/02/27/r...

- "In January 1982, President Ronald Reagan approved a CIA plan to sabotage the economy of the Soviet Union through covert transfers of technology that contained hidden malfunctions, including software that later triggered a huge explosion in a Siberian natural gas pipeline, according to a new memoir by a Reagan White House official."


> Forest Blizzard used our services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

This is the actually concerning one, imo. Pair it with Russia's '21 ASAT demo and it shows a militaristic stance toward space by a great(ish) power.


If this is the case, there must be a serious competency crisis in foreign intelligence agencies. It’s trivial to run your own local model.
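For reference, a minimal sketch of what "run your own local model" looks like, assuming llama-cpp-python is installed and a quantized GGUF weights file has been downloaded (the model path below is a placeholder); small quantized models run fine on a plain CPU:

  # Minimal local-inference sketch. Assumes llama-cpp-python and a
  # downloaded quantized GGUF model; the model path is a placeholder.
  # Nothing leaves the machine, and there is no provider-side moderation.
  from llama_cpp import Llama

  llm = Llama(model_path="./models/model-q4.gguf", n_ctx=2048)

  out = llm(
      "Explain how certificate pinning works:",
      max_tokens=256,
      temperature=0.7,
  )
  print(out["choices"][0]["text"])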


> It’s trivial to run your own local model

With what GPUs?


Ones they ship to countries that haven't signed on to the American export-control regime, e.g. Singapore, and that then get sent on to China.


They don't seem to be having much trouble securing consumer-oriented GPUs, which can do a lot of the work.


The ones made in China?


Oh that!


Would an advanced persistent threat attributed to the NSA be neutralized if discovered?


Curious how they managed to associate which accounts/queries belong to which actor/group.


Some principal component analysis probably. The false positive rate is most likely really high in that case.
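Purely as an illustration of the kind of clustering being guessed at here (the features and data below are entirely made up):

  # Illustrative sketch: project per-account usage features into a
  # low-dimensional space and look for unusually tight clusters.
  # Features and data are entirely made up.
  import numpy as np
  from sklearn.cluster import DBSCAN
  from sklearn.decomposition import PCA

  rng = np.random.default_rng(0)
  # rows = accounts; cols = e.g. request rate, topic mix, active hours
  X = rng.normal(size=(500, 12))
  X[:10] += 4  # a small group with suspiciously similar behavior

  Z = PCA(n_components=2).fit_transform(X)
  labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(Z)
  print("clusters found:", set(labels) - {-1})  # -1 marks noise

Anything like this would also flag plenty of innocuous accounts that merely behave similarly, which is presumably where the high false positive rate comes in.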


That sounds like they caught some minor casual operators. I think it’s a fallacy to conclude that it has limited utility based on that.

The real concern in my mind is use cases aimed at swaying opinion in aggregate via large numbers of real-seeming AI actors, attempts to move the Overton window, etc. For that sort of mass psychology operation, LLMs are perfect.


Fuck OpenAI. I'm a security researcher, and if you dare ask it about anything Windows-related, it tells you to screw off. Linux? Fine. But ask about some undocumented Windows behavior and it says it can't. Ask it about PatchGuard internals as a reference? It tells you it can't assist. Absolutely crazy. I could understand it refusing to write straight-up malware, oh wait, it does that no issue! Lord help you if you want to use it as a reference or educational tool...


It’s silly that they do that. I doubt it matters, though. In my experience, querying ChatGPT for factual information like that is a mistake. It isn’t reliably accurate enough.


Agreed. OpenAI defo needs to tune their models so they provide enough information without crossing into anything that could be malicious. Currently, anything remotely close is flagged as malicious.


Who gets to decide what is and isn't malicious?


Yeah, fuck OpenAI. The amount of censoring done by them in the name of "alignment" is fucking crazy


I don't get your attitude. This is not a public service.


OpenAI is not a public service, but they've certainly opened themselves to this type of criticism with their high-horsing about "benefiting humanity" and yadda yadda.

Meanwhile their actions suggest they're first and foremost interested in benefiting themselves and whoever's given them the most money, which certainly isn't their users.


Surely failing to be useful to paying customers is worse?


Huh, today I started hitting moderation errors when asking OpenAI for code samples.

e.g. does datafusion or polars support partial reads of parquet files in s3?

I wonder if they've rolled out some draconian restrictions.


Based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard. The identified OpenAI accounts associated with these actors were terminated.

I'm surprised one can name names like that.


The names of the groups are pseudonyms. That is why they all take the same form of [adjective] [weather noun]. Forest Blizzard is likely Glavset, called the Internet Research Agency in English.


There seems to be little if any concern from OpenAI about using AI trained on mass surveillance data to generate lists of individuals to assassinate... does that not count as a 'state-affiliated threat'?

https://www.theguardian.com/world/2023/dec/01/the-gospel-how...


Are you suggesting that there's some connection between OpenAI and the story you linked to?


As a general subject of concern in terms of state-level threat actors, yes, but more specifically:

https://www.cnbc.com/2024/01/16/openai-quietly-removes-ban-o...

Let's say some state-level entity asks OpenAI for access to its best code generating models to help it build software for autonomous kill vehicles that use face recognition algorithms to assassinate human targets. Most people would classify the end product as "a thing that should be banned internationally".

Consider IBM's history - IBM supplied its machines and technology to just about any private or state entity willing to sign a contract - and in most cases the result was beneficial to every sector of the economy, and improved government efficiency as well. IBM survived the Great Depression in part with a large Social Security management contract from FDR, and had several large military contracts afterwards, including in Vietnam for a decade. But there was also the German arm of the business in the 1930s, which I'd hope IBM leadership regrets in hindsight.

As the LLM technology platform seems a bit difficult to monetize at present, the sector will likely look to large government contracts to sustain its growth over the next decade (see AWS and the $10 billion NSA "WildandStormy" contract (yes, really)). So it would be nice to hear industry leaders explicitly state that using AI systems to write code for autonomous kill vehicles, or to mine phone records for automated generation of assassination lists, is unacceptable.

Transparency is going to be an issue - secret contracts for AI services should not be allowed.


I wonder why they closed the accounts rather than setting them loose on a special version of ChatGPT, primed to make hard-to-notice mistakes, and then, as the mistakes are uncovered, laughing in their faces.


Yes, special.


Much of the alleged "malicious uses" seem to only be malicious because of who the actors are. How dare they translate published technical papers!


Technical papers on exploits


Doesn't matter. This leads down a path where, if you run a service like Google Translate, you will be expected to restrict certain users (or all users) from translating a publicly accessible technical report from a security research journal. I can see the same national security logic saying that we can't let security papers be translated into Mandarin, Russian, Korean, or Farsi, because enemies of the United States may use them.


I don’t think Google Translate does this, though?


You can use Google Translate to do the same. No one is worried about Google Translate.


Groups sponsored by governments can easily write content for phishing campaigns, research companies, and write malware without the help of AI. They can also afford the GPUs to run local models if that gives them a boost.


China ... Russia ... Iran ... North Korea.

Yawn.

C'mon, tell me how you canned state-affiliated USA accounts and what they were up to.


Reading this felt like darkness.


? In what way?


Just wait until an authoritarian state gets its hands on an AGI/ASI that it controls internally, turning internal oppression into an unbreakable steady state and using the AI for warfare against external rivals.

Techno-optimists are delusional if they aren't spooked by things like this, which will happen unless we as a species somehow get over authoritarian dictatorships within a few years to a few decades. Which we won't.


This blog post should be giving you some optimism, that people are aware of the problem and trying to do something about it.

Me, I'm worried by the possibility of a "You Gotta Believe Me" attack[0]: even if done with the best of intentions (and everyone thinks they're on the good team), it risks a memetic monoculture: https://kitsunesoftware.wordpress.com/2019/12/30/memetic-mon...

[0] As fictionalised by Vernor Vinge in Rainbows End


When I look at the list of things they did, they seem to be largely research of the open literature. Am I missing something?


Using it to research a target and also generate phishing content for that target is pretty big, IMO.


> Charcoal Typhoon used our services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.

Also known as totally legal things. When did a supermarket ever conduct research into who is using the knives it sold for murder, and selectively block them from buying knives as well as other goods?


It's a combination of what's being researched and who is doing the researching. They're known state actors, according to OpenAI (and by extension, most likely Microsoft and US intelligence agencies). It'd be like if you knew Al Capone was up to something shady, and then he came into your store to buy books like "How to Run a Crime Syndicate", "Selling Booze During Prohibition for Dummies", and "Tax Evasion 101". It's kind of suspicious.


That's the crux of the problem: if you can prove Al Capone is doing something illegal, you can arrest him. Otherwise you shouldn't forbid him from buying a book like "Prohibition for Dummies" if that's not illegal for anybody else. On what grounds would that book be illegal for Al Capone?


In this case, if we're the store owner, we can just say "I don't like what you're doing in my store" and ban them. We don't have to play games where the person in our store gets to stay because they're not technically doing anything illegal. We just kick them out, it's our store. Although with Capone, your results probably would've varied. =P

Anyway, to do away with the analogy: OpenAI isn't obligated to let these groups continue using its services just because they aren't doing anything technically illegal. The groups are "enemies of the West", and at the very least OpenAI doesn't want the bad publicity of some news org finding out it was complicit.


So you can kick out anybody because you don't like them (e.g. a minority whose skin color you don't like)? Or does it only work when somebody is a famously and universally bad person (Al Capone, Hitler, etc.)?


As long as you're not kicking them out for reasons surrounding a protected class (race, religion, sexuality, etc.), and you own the property, then yes? It's your land; you can ask them to leave, and if they refuse, they're trespassing.


My point was just that you shouldn't punish people because you _suspect_ they committed crimes. Either they have committed crimes or they haven't.

This whole thing about protected classes is just a workaround to mitigate the excesses of that attitude, where you get to dispense your own justice in the first place.

Imagine if Al Capone were a black trans woman. Are you suddenly not allowed to refuse to sell her a book about how to run a mob, just because she's in a protected class (all other things being equal)?

Wouldn't it be simpler if one could just say that we're all equal, and that it's simply illegal to discriminate in the first place (presumption of innocence, etc.)?

This is going to be more and more relevant as people's reputation can be destroyed with a tweet without any proof.


I'm not interested in having bad faith arguments.



