DeepSeek R1 Appears to Be a CCP Cyberespionage Stunt Involving Possible Murder (winbuzzer.com)
6 points by Babawomba 6 months ago | hide | past | favorite | 9 comments


https://www.garanteprivacy.it/home/docweb/-/docweb-display/d...

translation:

AI: Privacy Guarantor asks DeepSeek for information. Possible risk to the data of millions of people in Italy.

The Italian Data Protection Authority (Garante) has sent a request for information to Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, the companies that provide the DeepSeek chatbot service, both on the web platform and in the app.

The Authority, considering the potentially high risk to the data of millions of people in Italy, has asked the two companies and their affiliates to confirm which personal data are collected, from which sources, for which purposes, on what legal basis they are processed, and whether they are stored on servers located in China.

The Guarantor also asked the companies what kind of information is used to train the artificial intelligence system and, if personal data are collected through web scraping, to clarify how both registered and non-registered users of the service have been, or are being, informed about the processing of their data.

The companies must provide the Authority with the requested information within 20 days.

Rome, January 28, 2025


Italy’s privacy regulator goes after DeepSeek: https://www.politico.eu/article/italys-privacy-regulator-goe...


Irrelevant since it's all closed anyway.

Who cares if they claimed to have trained it using only a paper cup and two paperclips. Their model works exactly as well as it works, and you're not reproducing their work either way.


The model works, sure, but that's not the point. OpenAI and others can replicate any technical advances in their future models. DeepSeek R1 will be "outdated" in 6-12 months and replaced by other, more powerful models for sure. If DeepSeek's success is based on stolen data, they won't be able to repeat it, because those security gaps will now get closed.

There has long been talk of a laissez-faire attitude toward cybersecurity at OpenAI, but that is surely coming to an end now. Same at Google.


I want to believe DeepSeek R1 is legit… but the more details emerge, the more it feels like something isn’t right.

The claim that R1 was trained for under $6M on 2,048 H800 GPUs always seemed suspicious. Efficient training techniques can cut costs, sure—but when OpenAI, Google, and Meta are all burning hundreds of millions to reach similar benchmarks, it’s hard to accept that DeepSeek did it for pennies on the dollar. Then Alexandr Wang casually drops that they actually have 50,000 H100 GPUs… what happened to that “low-cost” narrative? If this is true, it's not efficiency—it’s just access to massive hidden compute.
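For what it's worth, the headline figure isn't pulled from thin air; it follows from the numbers DeepSeek itself reported for the V3 base model (roughly 2.788M H800 GPU-hours at an assumed ~$2/GPU-hour rental rate, neither independently verified). A quick back-of-envelope sketch under those assumptions:

```python
# Back-of-envelope check of DeepSeek's claimed training cost.
# Both inputs are DeepSeek's own (unverified) figures, not independent data:
#   ~2.788M H800 GPU-hours, at an assumed rental rate of ~$2 per GPU-hour.
gpu_hours = 2_788_000
rate_per_hour = 2.0  # USD, assumed market rental price

cost = gpu_hours * rate_per_hour
print(f"Implied training cost: ${cost / 1e6:.2f}M")  # ≈ $5.58M
```

The arithmetic is internally consistent with the "under $6M" claim; the skepticism above is about the inputs (was it really only ~2.8M GPU-hours, and only H800s?), not the multiplication.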

The stolen OpenAI data theory is another red flag. OpenAI researchers have been hit by multiple security breaches in the last few years, and now we have a former OpenAI engineer found dead under very weird circumstances. Coincidence? Maybe. But corporate espionage in AI isn’t some sci-fi plot—it’s very real, and China has been caught running large-scale operations before (Google exfiltration cases, the ASML trade secret theft, etc.).

And then there’s the CCP-backed propaganda angle. This part is almost too predictable—China hypes up a “homegrown” breakthrough, gets state media to push it as “proof” they’ve surpassed the West, then quietly blocks foreign scrutiny. Lei pointed out that DeepSeek won’t even let U.S. phone numbers register. Why? If R1 is truly open-source and transparent, why limit access? We’ve seen this before with ByteDance, Alibaba, etc.—government-approved success stories that follow a controlled narrative.

But despite all that skepticism… R1 is real, and the performance numbers do exist. Whether they’re running stolen training data or smuggled GPUs, they’ve built something that competes with OpenAI’s o1. That’s still impressive. The question is how much of this is a real technological leap vs. how much is state-backed positioning and/or cutting corners.

So what happens next?

If DeepSeek is serious, they need outside audits: actual transparency, full datasets, external verification. Not just another "trust us" moment.

The U.S. needs better export control enforcement. We're looking at massive loopholes if China can stockpile 50K H100s despite all the restrictions.

AI labs (OpenAI, Anthropic, etc.) need better security. If OpenAI's data really did leak, this won't be the last time.

I don't think R1 itself is a scam, but the surrounding story feels curated, opaque, and suspiciously convenient. Maybe DeepSeek has built something remarkable, but until they open the books, I can't take their claims at face value.


There's a lot of thrashing about on this subject today. People have lost money, and that's what happens. Spreading uncertainty may help them recoup a few dollars, and theories are free.



Short of further research, no audit could establish anything reliable. Auditors will write a report for whoever pays them.

It is certainly a matter of trust, but there is no credible auditor here.


DeepSeek R1’s rise may be fueled by CCP-backed cyberespionage, illicit AI data theft, and a potential cover-up involving the death of former OpenAI researcher Suchir Balaji.



