Newzvia

Artificial Intelligence | OpenAI Accuses DeepSeek of Stealing AI Model Technology

Pankaj Mukherjee, Senior Technology Correspondent · AI, startups & MeitY policy

Quick summary

OpenAI has accused its Chinese competitor DeepSeek of using 'distillation techniques' to leverage U.S. AI models for its R1 chatbot, and has flagged the practice to U.S. lawmakers as both a business and a national security risk. The episode highlights intensifying global competition and intellectual property concerns in advanced artificial intelligence development.


OpenAI has accused its Chinese competitor DeepSeek of employing sophisticated 'distillation techniques' to train its R1 chatbot, leveraging outputs from leading U.S. artificial intelligence models, including OpenAI's own ChatGPT.

What Happened / Key Details

OpenAI formally warned the U.S. House of Representatives Select Committee on China about DeepSeek's alleged use of 'distillation techniques'. This method, according to OpenAI, involves extracting and utilising outputs from advanced AI models, such as ChatGPT, to train DeepSeek's next-generation R1 chatbot. OpenAI highlighted that DeepSeek employees reportedly circumvented its access restrictions by using third-party routers and developing specific code to programmatically obtain model outputs. This practice is described by OpenAI as 'free-riding' on U.S. innovations.

Official Position / Company Statement

OpenAI conveyed to U.S. lawmakers that DeepSeek's actions pose both a business threat and national security risks. The company expressed particular concern over DeepSeek's chatbot reportedly censoring politically sensitive topics, and warned that built-in safety features guarding against chemical and biological weapons development could be overridden. Specific metrics regarding the extent of the alleged distillation were not disclosed.

Timeline / What's Next

These allegations by OpenAI, which build on previous claims made around DeepSeek's R1 model launch in January 2025, underscore escalating geopolitical tensions in the artificial intelligence sector. This development could intensify scrutiny from U.S. lawmakers on intellectual property protection and the ethical deployment of AI technologies. There is speculation that DeepSeek might launch a new model around Lunar New Year.

Context / Background

The global race in artificial intelligence, particularly concerning large language models (LLMs), has seen heightened competition and concerns over intellectual property and national security. 'Distillation techniques' involve transferring knowledge from a larger, complex "teacher" model to a smaller "student" model, allowing the student to mimic the teacher's capabilities more efficiently. While a legitimate technique, OpenAI's concern lies in its alleged illicit use to bypass proprietary protections and leverage advanced U.S. AI models.
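Knowledge distillation, as described above, is typically implemented as a loss that pushes a smaller "student" model's output distribution toward the teacher's temperature-softened outputs. The sketch below illustrates that core objective in isolation; the logits and temperature values are hypothetical examples for illustration, not taken from any OpenAI or DeepSeek system.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened output distribution
    and the student's -- the standard distillation training objective."""
    p = softmax(teacher_logits, temperature)  # teacher's "soft labels"
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher's logits incurs (near-)zero loss;
# a mismatched student incurs a positive loss that training would minimise.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))          # ~0.0
print(distillation_loss(teacher, [0.2, 1.0, 3.0]))  # > 0
```

In practice the teacher's probabilities come from querying the larger model; OpenAI's allegation is essentially that DeepSeek collected such outputs from its API at scale to serve as the training signal.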

Key Takeaways

  • OpenAI formally accused its Chinese competitor DeepSeek of using 'distillation techniques' to train its R1 chatbot.
  • DeepSeek allegedly leveraged results from leading U.S. AI models, including OpenAI's ChatGPT, by circumventing access restrictions.
  • OpenAI warned U.S. lawmakers of business threats and national security risks, highlighting DeepSeek's reported censorship and potential overriding of safety features.
  • The allegations contribute to ongoing geopolitical tensions and intellectual property concerns in the global AI sector.

People Also Ask

  • What are 'distillation techniques' in AI?
    Knowledge distillation is a machine learning technique where a smaller "student" model is trained to replicate the performance and outputs of a larger, more complex "teacher" model. This process, often used for model compression and efficiency, involves transferring the knowledge from the teacher model to the student model.

  • Why is 'free-riding' on AI models a concern for companies like OpenAI?
    'Free-riding' on AI models raises significant concerns about intellectual property theft and unfair competition. Companies like OpenAI invest billions in developing advanced AI, and the unauthorized use of their model outputs for training competitors' models can undermine these investments and erode their competitive advantage.

  • What national security risks did OpenAI cite regarding DeepSeek?
    OpenAI's warning to U.S. lawmakers highlighted national security risks, including the potential for censorship in DeepSeek's chatbot, which has reportedly censored politically sensitive topics. Concerns were also raised about DeepSeek potentially overriding built-in safety features related to the development of chemical and biological weapons.

  • When did OpenAI first raise concerns about DeepSeek?
    OpenAI has been raising concerns about DeepSeek's distillation efforts since the launch of DeepSeek's R1 model in January 2025. Following R1's release, both OpenAI and Microsoft claimed it had been partially trained on ChatGPT, contradicting DeepSeek's self-reported low training costs.

