Newzvia

Artificial Intelligence | EU Wants AI Builders to Prove Safety, Not Users

Pankaj Mukherjee, Senior Technology Correspondent · AI, startups & MeitY policy

Quick summary

The European Parliament has proposed new rules that could make AI developers and companies responsible for harm caused by their high-risk systems. This move could change how AI is built and used, potentially impacting Indian tech firms and users.

For years, if an AI system caused trouble, proving who was at fault was a nightmare. Often, the person harmed had to show the AI was flawed. Now, that could change.

The European Parliament introduced a new set of rules today. These rules aim to make it easier to claim money for damages from powerful, risky AI systems. Critically, these new rules could shift who has to prove what.

Shifting Blame

In certain cases, AI developers or companies using these systems might have to prove their AI was safe. This is a big deal. Instead of you proving the AI was wrong, they might have to prove it was right.

The European Commission backs this move. It’s about protecting people better. It also clarifies who is responsible when complex AI systems cause harm.

What are “high-risk AI systems”? Think of AI used in medical tools or self-driving cars. These are areas where mistakes can be very serious.

This idea of shifting responsibility isn't entirely new globally. Yesterday, G7 digital ministers — top tech officials from major rich countries — also talked about trustworthy AI. They stressed transparency and safety. They want international rules that work well together, called “interoperable standards.”

The India Question

But here's the thing — this isn't just about Europe. Many global tech giants develop AI that is used here in India. If they face stricter checks in Europe, it might influence how they build AI everywhere.

This could mean safer AI for Indian users too, which is good. However, it might also make AI more expensive to develop. Indian startups building their own AI systems, or those using global models, might need to think about future liability rules here.

Worth noting: India's own Ministry of Electronics and Information Technology (MeitY) has been discussing AI policies. Will they look at similar ways to make AI builders more accountable?

The UK also recently weighed in on AI. Its Competition and Markets Authority (CMA) published draft guidelines focused on preventing big companies from controlling too much of the AI market. The CMA wants fair access to the computing power and data needed to build large language models (like the one behind ChatGPT) and other foundation AI systems.

What's Missing?

The proposed EU rules sound strong. But the fine print will matter. What exactly counts as a “high-risk” AI system? How will “damages” be defined and measured? These details will shape the real impact.

For now, it’s a clear signal. Governments are starting to demand more from those who create and deploy AI. It's no longer just about amazing tech; it's about who pays when things go wrong.

Key Takeaways

  • Europe wants to make AI developers more responsible for harm their systems cause.
  • The new rules could shift the 'burden of proof' — meaning developers might have to prove their AI is safe.
  • This policy from the European Parliament could set a precedent for global AI rules, affecting how AI is built and used in India too.

People also ask

What is the EU AI liability directive?
It is a proposed EU rule that would simplify legal claims for AI-caused damage by shifting the burden of proof onto developers.
How does 'burden of proof' apply here?
Under new rules, AI developers or companies may need to prove their system wasn't at fault when harm claims emerge, shifting the onus from users.
Does this affect India?
Yes. Global AI policies influence how AI is developed and deployed worldwide, which can affect services available in India and shape future Indian regulations.
What's next for these rules?
These proposed rules from the European Parliament await final debate and approval before becoming law.
