Newzvia


Pankaj Mukherjee, Senior Technology Correspondent · AI, startups & MeitY policy

Quick summary

OpenAI has accused its Chinese competitor DeepSeek of using 'distillation techniques' to leverage U.S. AI models for its R1 chatbot, flagging business and national security risks to U.S. lawmakers. The accusation highlights growing global competition and intellectual property concerns in advanced artificial intelligence development.

OpenAI Accuses DeepSeek of Stealing AI Model Technology

OpenAI has accused its Chinese competitor DeepSeek of employing sophisticated 'distillation techniques' to train its R1 chatbot, leveraging the outputs of leading U.S. artificial intelligence models, including OpenAI's own ChatGPT.

What Happened / Key Details

OpenAI formally warned the U.S. House of Representatives Select Committee on China about DeepSeek's alleged use of 'distillation techniques'. This method, according to OpenAI, involves extracting and utilising outputs from advanced AI models, such as ChatGPT, to train DeepSeek's next-generation R1 chatbot. OpenAI highlighted that DeepSeek employees reportedly circumvented its access restrictions by using third-party routers and developing specific code to programmatically obtain model outputs. This practice is described by OpenAI as 'free-riding' on U.S. innovations.

Official Position / Company Statement

OpenAI conveyed to U.S. lawmakers that DeepSeek's actions pose both a business threat and national security risks. The company expressed particular concern over DeepSeek's chatbot reportedly censoring politically sensitive topics, and raised potential issues with the overriding of built-in safety features related to chemical and biological weapons development. Specific metrics regarding the extent of the alleged distillation were not disclosed.

Timeline / What's Next

These allegations by OpenAI, which build on previous claims regarding DeepSeek's R1 model launch in January 2025, underscore escalating geopolitical tensions in the artificial intelligence sector. This development could intensify scrutiny from U.S. lawmakers on intellectual property protection and the ethical deployment of AI technologies. There is speculation that DeepSeek might launch a new model around Lunar New Year.

Context / Background

The global race in artificial intelligence, particularly concerning large language models (LLMs), has seen heightened competition and concerns over intellectual property and national security. 'Distillation techniques' involve transferring knowledge from a larger, complex "teacher" model to a smaller "student" model, allowing the student to mimic the teacher's capabilities more efficiently. While distillation is a legitimate and widely used technique, OpenAI's concern lies in its alleged illicit use to bypass proprietary protections and leverage advanced U.S. AI models.
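To make the teacher/student idea concrete, here is a minimal, self-contained sketch of the loss at the heart of knowledge distillation. It is an illustration only, not a reconstruction of anything DeepSeek is alleged to have done: the logit values and temperature are arbitrary assumptions, and real systems would use a deep-learning framework rather than plain Python.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores (logits) to probabilities.
    A higher temperature softens the distribution, exposing more of the
    teacher's 'dark knowledge' about relative option likelihoods."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened output distribution
    and the student's. Minimising this trains the student to mimic
    the teacher's behaviour."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

# A student whose outputs track the teacher's incurs a lower loss
# than one whose outputs diverge from them.
teacher = [3.0, 1.0, 0.2]
close_student = [2.8, 1.1, 0.3]
far_student = [0.1, 0.2, 3.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In practice, the student's training loop repeatedly queries the teacher for outputs and adjusts its own weights to drive this loss down, which is why access to a stronger model's outputs at scale is commercially valuable.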

Key Takeaways

  • OpenAI formally accused its Chinese competitor DeepSeek of using 'distillation techniques' to train its R1 chatbot.
  • DeepSeek allegedly leveraged results from leading U.S. AI models, including OpenAI's ChatGPT, by circumventing access restrictions.
  • OpenAI warned U.S. lawmakers of business threats and national security risks, highlighting DeepSeek's reported censorship and potential overriding of safety features.
  • The allegations contribute to ongoing geopolitical tensions and intellectual property concerns in the global AI sector.

People Also Ask

  • What are 'distillation techniques' in AI?
    Knowledge distillation is a machine learning technique where a smaller "student" model is trained to replicate the performance and outputs of a larger, more complex "teacher" model. This process, often used for model compression and efficiency, involves transferring the knowledge from the teacher model to the student model.

  • Why is 'free-riding' on AI models a concern for companies like OpenAI?
    'Free-riding' on AI models raises significant concerns about intellectual property theft and unfair competition. Companies like OpenAI invest billions in developing advanced AI, and the unauthorized use of their model outputs for training competitors' models can undermine these investments and erode their competitive advantage.

  • What national security risks did OpenAI cite regarding DeepSeek?
    OpenAI's warning to U.S. lawmakers highlighted national security risks, including the potential for censorship in DeepSeek's chatbot, which has reportedly censored politically sensitive topics. Concerns were also raised about DeepSeek potentially overriding built-in safety features related to the development of chemical and biological weapons.

  • When did OpenAI first raise concerns about DeepSeek?
    OpenAI has been raising concerns about DeepSeek's distillation efforts since the launch of DeepSeek's R1 model in January 2025. Following R1's release, both OpenAI and Microsoft claimed it had been partially trained on ChatGPT, contradicting DeepSeek's self-reported low training costs.
