
Is Europe ready to police AI? Supervision and sanctions start soon

Business • Aug 1, 2025, 5:00 AM
7 min read

Significant changes in terms of oversight and penalties are around the corner for AI suppliers in Europe as new provisions of the EU’s AI Act enter into force from 2 August.

Here’s what will change this month regarding the EU’s rulebook on AI, which has been in force for exactly one year but is being implemented gradually.

National oversight

On 2 August, member states will have to notify the European Commission about which market surveillance authorities they appoint to oversee businesses’ compliance with the AI Act.

That means providers of AI systems will face national scrutiny from that date.

Euronews reported in May that, with just three months to go until the early August deadline, it remained unclear in at least half of the member states which authority would be nominated.

The EU executive declined to comment in March on which countries were ready, but member states that recently went through elections are expected to be delayed in setting up these regulators.

According to a Commission official, some notifications have now been received, and they are under consideration. 

Laura Lazaro Cabrera, Programme Director for Equity and Data at the Centre for Democracy and Technology, told Euronews that many member states are set to miss the 2 August deadline to appoint their regulators. 

She said it’s “crucial” that national authorities are appointed as soon as possible, and “that they are competent and properly resourced to oversee the broad range of risks posed by AI systems, including those to fundamental rights.”

Artur Bogucki, an associate researcher at the Centre for European Policy Studies (CEPS), echoed the likely delays. 

“This isn't surprising when you consider the sheer complexity of what's required. Countries need to establish market surveillance authorities, set up notifying bodies, define sanction regimes, and somehow find staff with expertise spanning AI, data computing, cybersecurity, fundamental rights, and sector-specific knowledge. That's a tall order in today's competitive tech talent market,” he said. 

Bogucki said it doesn’t stop there, because it remains to be seen how the multiple bodies at both EU and national levels will coordinate with each other.

“This complexity becomes even more challenging when you consider how the AI Act must interact with existing regulations like GDPR, the Digital Services Act, and the Digital Markets Act. We're already seeing potential for overlaps and conflicts, reminiscent of how different data protection authorities across Europe have taken divergent approaches to regulating tech companies,” he said.

Penalties

Also entering into force are provisions enabling penalties. Companies may be fined up to €35 million for breaches of the AI Act, or up to 7% of total worldwide annual turnover, whichever is higher.

EU countries will need to adopt implementing laws that set out penalties for breaches and empower their authorities. For smaller companies, lower fines will apply. 

The AI Act sets a ceiling, not a floor, for fines. According to Lazaro Cabrera, there is likely going to be “significant variability on how member states choose to fine their public authorities for non-compliance of the AI Act, if at all.”

She said that while there will be some divergence in how member states set the level of fines applicable, “forum-shopping in this context has its limits.”

“Ultimately market surveillance authorities have jurisdiction to act in connection to any product entering the EU market as a whole, and fines are only one of many tools at their disposal,” she said.

Bogucki said that the governance structure also needs to grapple with questions about prohibited AI practices, for example when it comes to biometric identification. 

“Different member states may have very different political appetites for enforcement in these areas, and without strong coordination mechanisms at the EU level, we could see the same fragmentation that has plagued GDPR enforcement,” he said. 

GPAI

Lastly, the rules on general-purpose AI (GPAI) models – which include large language models such as X’s Grok, Google’s Gemini, and OpenAI’s ChatGPT – will enter into force.

In July the Commission released a much-debated Code of Practice on GPAI. This voluntary set of rules that touches on transparency, copyright, and safety and security issues, aims to help providers of GPAI models comply with the AI Act.

The Commission has recently said that those who don’t sign can expect more scrutiny, whereas signatories are presumed to comply with the AI Act. Even so, companies that sign the code will still need to comply with the AI rulebook itself.

US tech giant Meta said last week that it will not sign, having slammed the rules for stifling innovation, while others such as Google and OpenAI said they will sign up.

To complicate matters further, products already placed on the market before 2 August have a two-year period to implement the rules, while all new tools launched after that date must comply straight away.

The EU AI Act continues to roll out in phases, each with new obligations for providers and deployers. Two years from now, on 2 August 2027, the AI Act will be applicable in full.